Message-Id: <1522230419-12275-1-git-send-email-andrea.parri@amarulasolutions.com>
Date: Wed, 28 Mar 2018 11:46:59 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Andrea Parri <andrea.parri@...rulasolutions.com>
Subject: [PATCH v2 for-4.17 1/3] arm64: Remove smp_mb() from arch_spin_is_locked()
Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait}
against local locks") added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such a guarantee.
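For reference, the ordering which that barrier provided can be sketched as
follows (a schematic with hypothetical locks A and B, not the actual
ipc/sem.c code):

	spin_lock(&A);		/* this CPU now holds A */
	r = spin_is_locked(&B);	/* the smp_mb() in arch_spin_is_locked()
				 * ensured this load could not be reordered
				 * before the store taking A */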
It is however understood (though not documented) that spin_is_locked() is
not required to provide such an ordering guarantee (a guarantee that is
currently _not_ provided by all implementations/architectures), and that
callers relying on this ordering should instead insert suitable memory
barriers before acting on the result of spin_is_locked().
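That is, a caller which does need this ordering is expected to make the
barrier explicit at the call site; a minimal sketch (again with
hypothetical locks A and B):

	spin_lock(&A);
	smp_mb();	/* order the store taking A before the load below */
	if (spin_is_locked(&B)) {
		/* ... act on the result ... */
	}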
A recent audit[1] of the call sites of {,raw_}spin_is_locked() revealed
that none of these callers rely on the ordering guarantee anymore; this
commit therefore removes the leading smp_mb() from the primitive, thus
effectively reverting 38b850a73034f.
[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
Acked-by: Will Deacon <will.deacon@....com>
Cc: Will Deacon <will.deacon@....com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
---
arch/arm64/include/asm/spinlock.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665de..26c5bd7d88d8d 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
--
2.7.4