Message-Id: <1523895165-17576-4-git-send-email-paulmck@linux.vnet.ibm.com>
Date: Mon, 16 Apr 2018 09:12:32 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Cc: mingo@...nel.org, stern@...land.harvard.edu,
parri.andrea@...il.com, will.deacon@....com, peterz@...radead.org,
boqun.feng@...il.com, npiggin@...il.com, dhowells@...hat.com,
j.alglave@....ac.uk, luc.maranget@...ia.fr, akiyks@...il.com,
Andrea Parri <andrea.parri@...rulasolutions.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: [PATCH tools/memory-model 04/17] locking: Document the semantics of spin_is_locked()
From: Andrea Parri <andrea.parri@...rulasolutions.com>

There has been recurring uncertainty about the semantics of
spin_is_locked(), likely because these semantics have remained
undocumented, or because they have historically been linked to the
(likewise unclear) semantics of spin_unlock_wait().

A recent audit [1] of the callers of the primitive confirmed that
none of them rely on particular ordering guarantees; document these
semantics by adding a docbook header to spin_is_locked().  Also
describe behaviors specific to certain CONFIG_SMP=n builds.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
https://marc.info/?l=linux-kernel&m=152042843808540&w=2
https://marc.info/?l=linux-kernel&m=152043346110262&w=2
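
For illustration, a minimal sketch of the debug-only style of use that
the audit found; struct foo, foo_update_stats(), and their locking rule
are hypothetical names invented for this example, not taken from any
audited caller:

	/*
	 * Hedged sketch: spin_is_locked() is used purely as a
	 * diagnostic aid, never as a means of synchronization.
	 */
	#include <linux/bug.h>
	#include <linux/spinlock.h>

	struct foo {
		spinlock_t lock;
		unsigned long nr_updates;
	};

	/* Callers are expected to hold f->lock. */
	static void foo_update_stats(struct foo *f)
	{
		/*
		 * Diagnostic check only: no memory ordering is assumed
		 * from it.  It is guarded by IS_ENABLED(CONFIG_SMP)
		 * because, as the patch notes, CONFIG_SMP=n builds with
		 * CONFIG_DEBUG_SPINLOCK=n make spin_is_locked() return
		 * 0 unconditionally.
		 */
		if (IS_ENABLED(CONFIG_SMP))
			WARN_ON_ONCE(!spin_is_locked(&f->lock));

		f->nr_updates++;
	}
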
Co-Developed-by: Andrea Parri <andrea.parri@...rulasolutions.com>
Co-Developed-by: Alan Stern <stern@...land.harvard.edu>
Co-Developed-by: David Howells <dhowells@...hat.com>
Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
Signed-off-by: Alan Stern <stern@...land.harvard.edu>
Signed-off-by: David Howells <dhowells@...hat.com>
Cc: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: Jade Alglave <j.alglave@....ac.uk>
Cc: Luc Maranget <luc.maranget@...ia.fr>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Acked-by: Randy Dunlap <rdunlap@...radead.org>
---
include/linux/spinlock.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d258..1e8a46435838 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,24 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
})
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Returns: 1 if @lock is locked, 0 otherwise.
+ *
+ * Note that the function only tells you that the spinlock is
+ * seen to be locked, not that it is locked on your CPU.
+ *
+ * Further, on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n,
+ * the return value is always 0 (see include/linux/spinlock_up.h).
+ * Therefore you should not rely heavily on the return value.
+ */
static __always_inline int spin_is_locked(spinlock_t *lock)
{
return raw_spin_is_locked(&lock->rlock);
--
2.5.2
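
A further hedged illustration of the ordering caveat in the new
kernel-doc comment above; bar_lock, bar_data, and bar_read_if_idle()
are made-up names, and the pattern is intentionally broken: observing
the lock as free does not order the subsequent data access against a
writer that acquires bar_lock right after the check, so real locking
(or explicit memory barriers) is needed instead:

	#include <linux/compiler.h>
	#include <linux/errno.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(bar_lock);	/* hypothetical lock */
	static int bar_data;			/* hypothetical shared data */

	/* BROKEN: do not use spin_is_locked() this way. */
	static int bar_read_if_idle(void)
	{
		/*
		 * Even if the lock is observed to be free here, nothing
		 * orders the load of bar_data against a writer that
		 * takes bar_lock immediately after the check; the
		 * kernel-doc comment above makes no ordering promise.
		 */
		if (!spin_is_locked(&bar_lock))
			return READ_ONCE(bar_data);

		return -EBUSY;
	}

Taking bar_lock around the read, or pairing explicit barriers with the
writer, would be the safe alternative.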