Message-Id: <1522230457-12337-1-git-send-email-andrea.parri@amarulasolutions.com>
Date: Wed, 28 Mar 2018 11:47:37 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Andrea Parri <andrea.parri@...rulasolutions.com>
Subject: [PATCH v2 for-4.17 2/3] powerpc: Remove smp_mb() from arch_spin_is_locked()

Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
added an smp_mb() to arch_spin_is_locked(), in order to ensure that

	Thread 0			Thread 1

	spin_lock(A);			spin_lock(B);
	r0 = spin_is_locked(B);		r1 = spin_is_locked(A);

never ends up with r0 = r1 = 0, and reported one example (in ipc/sem.c)
relying on such a guarantee.

It is however understood (though undocumented) that spin_is_locked() is
not required to provide such an ordering guarantee, a guarantee that is
currently _not_ provided by all implementations/architectures, and that
callers relying on such ordering should instead place suitable memory
barriers before acting on the result of spin_is_locked().

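For illustration only (this snippet is not part of the patch; it reuses
the locks A and B from the example above), a caller that does need the
ordering can make the barrier explicit at the call site, e.g.:

	/*
	 * Sketch: enforce the ordering in the caller rather than in
	 * spin_is_locked().  smp_mb__after_spinlock() pairs with the
	 * preceding spin_lock() to provide full-barrier semantics,
	 * ordering the lock acquisition before the load below.
	 */
	spin_lock(A);
	smp_mb__after_spinlock();
	r0 = spin_is_locked(B);

On architectures where spin_lock() already implies a full barrier,
smp_mb__after_spinlock() compiles to a no-op, so making the ordering
explicit costs nothing there.
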
Following a recent audit[1] of the callers of {,raw_}spin_is_locked(),
which revealed that none of them rely on this guarantee anymore, this
commit removes the leading smp_mb() from the primitive, thus
effectively reverting 51d7d5205d338.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
---
arch/powerpc/include/asm/spinlock.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index b9ebc3085fb79..ecc141e3f1a73 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -67,7 +67,6 @@ static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	smp_mb();
 	return !arch_spin_value_unlocked(*lock);
 }
 
--
2.7.4