Message-Id: <20181220085903.616647147@linuxfoundation.org>
Date: Thu, 20 Dec 2018 10:18:13 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Will Deacon <will.deacon@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
andrea.parri@...rulasolutions.com, longman@...hat.com,
Ingo Molnar <mingo@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 4.19 01/67] locking/qspinlock: Re-order code
4.19-stable review patch. If anyone has any objections, please let me know.
------------------
commit 53bf57fab7321fb42b703056a4c80fc9d986d170 upstream.
Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL)
such that we lose the indent. This also results in a more natural code
flow, IMO.
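
For reviewers reading this without the file open: the change is the
classic guard-clause flip. Instead of nesting the whole pending fast
path under the success condition, the contended case is tested first
and bails out to the queue, so the common path reads straight down at
the outer indent level. A minimal sketch of the pattern (hypothetical
names and mask, not the kernel code):

#include <stdbool.h>

#define CONTENDED_MASK 0x6u     /* hypothetical: any bit beyond "locked" */

/* Before: the common (uncontended) case sits one indent level deep. */
static bool fast_path_before(unsigned int val)
{
        if (!(val & CONTENDED_MASK)) {
                /* ... wait for owner, take the lock ... */
                return true;
        }
        return false;           /* fall through to queueing */
}

/* After: the uncommon case bails out early and the fast path loses
 * the indent, reading top to bottom as the expected flow. */
static bool fast_path_after(unsigned int val)
{
        if (val & CONTENDED_MASK)
                return false;   /* undo and queue */

        /* ... wait for owner, take the lock ... */
        return true;
}

The unlikely() annotation in the patch additionally hints to the
compiler that the contended branch is cold, so it can be laid out off
the straight-line path.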
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Will Deacon <will.deacon@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: andrea.parri@...rulasolutions.com
Cc: longman@...hat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/locking/qspinlock.c | 56 ++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 29 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index bfaeb05123ff..ec343276f975 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -330,39 +330,37 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
* 0,0,1 -> 0,1,1 ; pending
*/
val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
- if (!(val & ~_Q_LOCKED_MASK)) {
- /*
- * We're pending, wait for the owner to go away.
- *
- * *,1,1 -> *,1,0
- *
- * this wait loop must be a load-acquire such that we match the
- * store-release that clears the locked bit and create lock
- * sequentiality; this is because not all
- * clear_pending_set_locked() implementations imply full
- * barriers.
- */
- if (val & _Q_LOCKED_MASK) {
- atomic_cond_read_acquire(&lock->val,
- !(VAL & _Q_LOCKED_MASK));
- }
-
- /*
- * take ownership and clear the pending bit.
- *
- * *,1,0 -> *,0,1
- */
- clear_pending_set_locked(lock);
- qstat_inc(qstat_lock_pending, true);
- return;
+ /*
+ * If we observe any contention; undo and queue.
+ */
+ if (unlikely(val & ~_Q_LOCKED_MASK)) {
+ if (!(val & _Q_PENDING_MASK))
+ clear_pending(lock);
+ goto queue;
}
/*
- * If pending was clear but there are waiters in the queue, then
- * we need to undo our setting of pending before we queue ourselves.
+ * We're pending, wait for the owner to go away.
+ *
+ * 0,1,1 -> 0,1,0
+ *
+ * this wait loop must be a load-acquire such that we match the
+ * store-release that clears the locked bit and create lock
+ * sequentiality; this is because not all
+ * clear_pending_set_locked() implementations imply full
+ * barriers.
+ */
+ if (val & _Q_LOCKED_MASK)
+ atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
+
+ /*
+ * take ownership and clear the pending bit.
+ *
+ * 0,1,0 -> 0,0,1
*/
- if (!(val & _Q_PENDING_MASK))
- clear_pending(lock);
+ clear_pending_set_locked(lock);
+ qstat_inc(qstat_lock_pending, true);
+ return;
/*
 * End of pending bit optimistic spinning and beginning of MCS
 * queuing.
 */
--
2.19.1
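
The wait-loop comment above carries the key ordering argument: the
waiter's load-acquire must pair with the store-release that clears the
locked bit, otherwise the new owner might not see the previous owner's
critical-section writes. What follows is a compressed userspace
rendering of the patched fast path using C11 atomics; the names
(lockval, trylock_pending) and the flat bit layout are invented for
illustration, and it skips the mixed-size accesses, qstat accounting,
and paravirt hooks in the real kernel/locking/qspinlock.c:

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical layout: bit 0 = locked, bit 1 = pending, rest = tail. */
#define LOCKED  0x1u
#define PENDING 0x2u

static atomic_uint lockval;     /* stand-in for lock->val */

static bool trylock_pending(void)
{
        /* 0,0,1 -> 0,1,1 : like atomic_fetch_or_acquire(_Q_PENDING_VAL, ...) */
        unsigned int val = atomic_fetch_or_explicit(&lockval, PENDING,
                                                    memory_order_acquire);

        /* Any contention (pending already set, or a queue tail)?
         * Undo and fall back to the queue, as in the patch. */
        if (val & ~LOCKED) {
                if (!(val & PENDING))   /* only undo a pending bit WE set */
                        atomic_fetch_and_explicit(&lockval, ~PENDING,
                                                  memory_order_relaxed);
                return false;
        }

        /* We're pending; wait for the owner to go away.  This acquire
         * load pairs with the release store in unlock() below, so the
         * previous owner's critical section is visible to us. */
        while (atomic_load_explicit(&lockval, memory_order_acquire) & LOCKED)
                ;       /* the kernel uses atomic_cond_read_acquire() here */

        /* 0,1,0 -> 0,0,1 : pending is set and locked is clear, so adding
         * (LOCKED - PENDING) flips both while preserving any tail bits. */
        atomic_fetch_add_explicit(&lockval, LOCKED - PENDING,
                                  memory_order_relaxed);
        return true;
}

static void unlock(void)
{
        /* The release store that the waiters' load-acquire pairs with. */
        atomic_fetch_and_explicit(&lockval, ~LOCKED, memory_order_release);
}

The contention test (val & ~LOCKED) mirrors the flipped branch in the
patch: any bit beyond the locked bit means the pending fast path has
already lost, so the setter undoes its pending bit (if it was the one
to set it) and queues instead.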