Message-Id: <20230506062934.69652-1-qiuxu.zhuo@intel.com>
Date: Sat, 6 May 2023 14:29:34 +0800
From: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>
Cc: Qiuxu Zhuo <qiuxu.zhuo@...el.com>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org
Subject: [PATCH 1/1] locking/qspinlock: Fix state-transition changes in comments
1. Multiple locker CPUs may concurrently attempt to set the qspinlock
   pending bit.
   The first of these CPUs (the pending CPU) sets the pending bit,
   making the state transition (the qspinlock pending bit is set):
       0,0,* -> 0,1,*
   The rest of these CPUs are queued on the MCS queue, making the state
   transition (the qspinlock tail is updated):
       0,1,* -> *,1,*
   The pending CPU waits until the lock owner goes away, making the
   state transition (the qspinlock locked field is cleared):
       *,1,* -> *,1,0
   The pending CPU then takes ownership and clears the pending bit,
   making the state transition:
       *,1,0 -> *,0,1
2. The head of the MCS queue takes ownership and calls set_locked(),
   making the state transition:
       *,*,0 -> *,*,1
Fix the state transitions in the code comments accordingly.
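For reference, below is a rough, user-space sketch of how the
(tail, pending, locked) triple is packed into the lock word, walking
through the pending-path transitions listed above. It is illustrative
only, not the kernel implementation: the Q_* names are stand-ins for
the kernel's _Q_* constants, and it assumes the NR_CPUS < 16K layout
(locked byte in bits 0-7, pending bit in bit 8, tail in bits 16-31).

#include <stdint.h>
#include <stdio.h>

#define Q_LOCKED_VAL    (1U << 0)   /* locked byte set to 1 */
#define Q_LOCKED_MASK   0xffU       /* whole locked byte */
#define Q_PENDING_VAL   (1U << 8)   /* pending bit */
#define Q_TAIL_OFFSET   16          /* tail (MCS queue id) starts here */

static void show(const char *step, uint32_t val)
{
        printf("%-28s tail=%u pending=%u locked=%u\n", step,
               (unsigned)(val >> Q_TAIL_OFFSET),
               (unsigned)((val & Q_PENDING_VAL) >> 8),
               (unsigned)(val & Q_LOCKED_MASK));
}

int main(void)
{
        uint32_t val = Q_LOCKED_VAL;        /* lock currently owned: 0,0,1 */

        val |= Q_PENDING_VAL;               /* pending CPU:  0,0,* -> 0,1,* */
        show("set pending", val);

        val |= 1U << Q_TAIL_OFFSET;         /* others queue: 0,1,* -> *,1,* */
        show("queue on MCS tail", val);

        val &= ~Q_LOCKED_MASK;              /* owner leaves: *,1,* -> *,1,0 */
        show("owner releases lock", val);

        val = (val & ~Q_PENDING_VAL) | Q_LOCKED_VAL; /* *,1,0 -> *,0,1 */
        show("clear_pending_set_locked", val);

        return 0;
}

Built with any C compiler, the sketch simply prints the four
transitions of the pending path in order.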
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
---
kernel/locking/qspinlock.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ebe6b8ec7cb3..efebbf19f887 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -257,7 +257,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
* set_locked - Set the lock bit and own the lock
* @lock: Pointer to queued spinlock structure
*
- * *,*,0 -> *,0,1
+ * *,*,0 -> *,*,1
*/
static __always_inline void set_locked(struct qspinlock *lock)
{
@@ -348,7 +348,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* trylock || pending
*
- * 0,0,* -> 0,1,* -> 0,0,1 pending, trylock
+ * 0,0,* -> 0,1,* -> ... -> *,0,1 pending, trylock
*/
val = queued_fetch_set_pending_acquire(lock);
@@ -358,6 +358,8 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
* Undo and queue; our setting of PENDING might have made the
* n,0,0 -> 0,0,0 transition fail and it will now be waiting
* on @next to become !NULL.
+ *
+ * 0,1,* -> *,1,*
*/
if (unlikely(val & ~_Q_LOCKED_MASK)) {
@@ -371,7 +373,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* We're pending, wait for the owner to go away.
*
- * 0,1,1 -> *,1,0
+ * *,1,* -> *,1,0
*
* this wait loop must be a load-acquire such that we match the
* store-release that clears the locked bit and create lock
@@ -385,7 +387,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
/*
* take ownership and clear the pending bit.
*
- * 0,1,0 -> 0,0,1
+ * *,1,0 -> *,0,1
*/
clear_pending_set_locked(lock);
lockevent_inc(lock_pending);
--
2.17.1