Message-Id: <1519814372-19941-1-git-send-email-parri.andrea@gmail.com>
Date: Wed, 28 Feb 2018 11:39:32 +0100
From: Andrea Parri <parri.andrea@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Andrea Parri <parri.andrea@...il.com>,
Alan Stern <stern@...land.harvard.edu>,
Will Deacon <will.deacon@....com>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Akira Yokosawa <akiyks@...il.com>
Subject: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
There has been recurrent uncertainty about the semantics of
spin_is_locked(), likely because these semantics have remained
undocumented and have historically been tied to the (likewise
unclear) semantics of spin_unlock_wait().

Document these semantics.
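To illustrate (a minimal sketch made up for this changelog: 'flag',
's' and do_work() are hypothetical names, not existing code), a thread
that needs one of its stores to be ordered before the lock-state read
must insert an explicit barrier, since spin_is_locked() itself
provides none:

	WRITE_ONCE(flag, 1);	 /* store to be made visible */
	smp_mb();		 /* order the store against the read below */
	if (spin_is_locked(&s))	 /* reads the lock state; no ordering implied */
		do_work();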
Signed-off-by: Andrea Parri <parri.andrea@...il.com>
Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Will Deacon <will.deacon@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: David Howells <dhowells@...hat.com>
Cc: Jade Alglave <j.alglave@....ac.uk>
Cc: Luc Maranget <luc.maranget@...ia.fr>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@...il.com>
---
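For reference, a debugging-only usage sketch consistent with the
"debugging purposes" note in the comment below; the struct and
function names here are hypothetical, for illustration only:

	#include <linux/bug.h>
	#include <linux/spinlock.h>

	/* Hypothetical data structure, for illustration only. */
	struct foo {
		spinlock_t lock;
		int state;
	};

	static void foo_update(struct foo *f)
	{
		/* Debug check only: spin_is_locked() implies no ordering. */
		WARN_ON_ONCE(!spin_is_locked(&f->lock));
		f->state++;
	}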
include/linux/spinlock.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d2584..2639fdc9a916c 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
})
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it can be used for debugging purposes or, when
+ * additional synchronization is needed, combined with other
+ * constructs (such as memory barriers) enforcing the ordering.
+ *
+ * Return: 1 if @lock is (found to be) locked; 0 otherwise.
+ */
static __always_inline int spin_is_locked(spinlock_t *lock)
{
return raw_spin_is_locked(&lock->rlock);
--
2.7.4