Some paths want to check that a spinlock is held, others want to check that
it's not held; it depends on the context. So returning 1 on UP would break a
bunch of code as well.

On Mon, Jan 14, 2013 at 12:18 AM, Jerry Chu wrote:
>
> On Sun, Jan 13, 2013 at 7:05 PM, Eric Dumazet wrote:
>
>> Oh well yes, this doesn't quite work on !SMP.
>>
>
> Strange - how would one assert a spin lock is held, and obviously only for
> SMP? (I almost think arch_spin_is_locked(lock) should be ((void)(lock), 1)
> for UP for the purpose of assertion...)
>
> Also it looks like there are a bunch of other places where a
> spin_is_locked() assertion is made in the source tree. (Perhaps they are
> only configured for MP?)
>
> Thanks,
>
> Jerry
>
>
>> And this kind of bug is frequent....
>>
>> See following example :
>>
>> commit b9980cdcf2524c5fe15d8cbae9c97b3ed6385563
>> Author: Hugh Dickins
>> Date:   Wed Feb 8 17:13:40 2012 -0800
>>
>>     mm: fix UP THP spin_is_locked BUGs
>>
>>     Fix CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_SMP=n CONFIG_DEBUG_VM=y
>>     CONFIG_DEBUG_SPINLOCK=n kernel: spin_is_locked() is then always false,
>>     and so triggers some BUGs in Transparent HugePage codepaths.
>>
>>     asm-generic/bug.h mentions this problem, and provides a WARN_ON_SMP(x);
>>     but being too lazy to add VM_BUG_ON_SMP, BUG_ON_SMP, WARN_ON_SMP_ONCE,
>>     VM_WARN_ON_SMP_ONCE, just test NR_CPUS != 1 in the existing VM_BUG_ONs.
>>
>>     Signed-off-by: Hugh Dickins
>>     Cc: Andrea Arcangeli
>>     Cc:
>>     Signed-off-by: Andrew Morton
>>     Signed-off-by: Linus Torvalds
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index b3ffc21..91d3efb 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2083,7 +2083,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
>>  {
>>  	struct mm_struct *mm = mm_slot->mm;
>>
>> -	VM_BUG_ON(!spin_is_locked(&khugepaged_mm_lock));
>> +	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
>>
>>
>> On Sun, Jan 13, 2013 at 1:39 PM, Felix Fietkau wrote:
>>
>>> On 2013-01-13 7:03 PM, Eric Dumazet wrote:
>>> > I suspect a bug in the spin_is_locked() implementation on your arch, as
>>> > the socket lock should be held at this point.
>>> I don't think this is an arch implementation bug, this probably happens
>>> on all !SMP systems. See this bit from include/linux/spinlock_up.h:
>>>
>>> #define arch_spin_is_locked(lock)	((void)(lock), 0)
>>>
>>> - Felix
>>>
>>>
>>
>
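
For reference, a minimal userspace sketch (not kernel code) of the situation
described above: the spin_is_locked() definition mirrors
include/linux/spinlock_up.h, and the guarded assertion mirrors the
NR_CPUS != 1 test from the commit quoted above. The spinlock_t stand-in, the
ASSERT_* wrappers and main() are invented purely for illustration.

/*
 * Userspace sketch of the UP spin_is_locked() problem; mock definitions,
 * not kernel code.
 */
#include <assert.h>
#include <stdio.h>

#define NR_CPUS 1				/* pretend CONFIG_SMP=n */

typedef struct { int dummy; } spinlock_t;	/* stand-in: UP lock has no state */

/* On UP the lock carries no state, so "is it locked?" has no real answer.
 * spinlock_up.h simply answers 0: */
#define spin_is_locked(lock)	((void)(lock), 0)

/* Asserting "held" with that definition always fires on UP ... */
#define ASSERT_HELD_NAIVE(lock)		assert(spin_is_locked(lock))
/* ... and if the macro returned 1 instead, asserting "not held" (e.g. before
 * freeing an object) would always fire.  Neither constant satisfies both
 * kinds of caller, which is the point made at the top of this mail. */
#define ASSERT_NOT_HELD_NAIVE(lock)	assert(!spin_is_locked(lock))

/* The workaround from commit b9980cdc: only evaluate the check on SMP. */
#define ASSERT_HELD_GUARDED(lock) \
	assert(NR_CPUS == 1 || spin_is_locked(lock))

int main(void)
{
	spinlock_t lock = { 0 };

	ASSERT_HELD_GUARDED(&lock);	/* short-circuits to a no-op on UP */
	ASSERT_NOT_HELD_NAIVE(&lock);	/* passes only because 0 is returned */
	/* ASSERT_HELD_NAIVE(&lock); */	/* would abort: 0 is always returned */

	printf("UP build: guarded assertion is a no-op\n");
	return 0;
}

The guarded form shows why the commit tests NR_CPUS != 1 (or uses
WARN_ON_SMP) instead of changing the UP return value: both "held" and "not
held" checks coexist in the tree, so no single constant can satisfy them all.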