Message-ID: <164914778225.389.644461284127262702.tip-bot2@tip-bot2>
Date: Tue, 05 Apr 2022 08:36:22 -0000
From: "tip-bot2 for Waiman Long" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Waiman Long <longman@...hat.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/rwsem: No need to check for handoff bit
if wait queue empty

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     f9e21aa9e6fb11355e54c8949a390d49ca21cde1
Gitweb:        https://git.kernel.org/tip/f9e21aa9e6fb11355e54c8949a390d49ca21cde1
Author:        Waiman Long <longman@...hat.com>
AuthorDate:    Tue, 22 Mar 2022 11:20:57 -04:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 05 Apr 2022 10:24:34 +02:00

locking/rwsem: No need to check for handoff bit if wait queue empty

Since commit d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling
more consistent"), the handoff bit is always cleared if the wait queue
becomes empty. There is no need to check for RWSEM_FLAG_HANDOFF when
the wait list is known to be empty.

Signed-off-by: Waiman Long <longman@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20220322152059.2182333-2-longman@redhat.com
---
kernel/locking/rwsem.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6..b077b1b 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -977,12 +977,11 @@ queue:
 	if (list_empty(&sem->wait_list)) {
 		/*
 		 * In case the wait queue is empty and the lock isn't owned
-		 * by a writer or has the handoff bit set, this reader can
-		 * exit the slowpath and return immediately as its
-		 * RWSEM_READER_BIAS has already been set in the count.
+		 * by a writer, this reader can exit the slowpath and return
+		 * immediately as its RWSEM_READER_BIAS has already been set
+		 * in the count.
 		 */
-		if (!(atomic_long_read(&sem->count) &
-		      (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
+		if (!(atomic_long_read(&sem->count) & RWSEM_WRITER_MASK)) {
 			/* Provide lock ACQUIRE */
 			smp_acquire__after_ctrl_dep();
 			raw_spin_unlock_irq(&sem->wait_lock);
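
[Editor's note] For readers following along, below is a minimal,
standalone userspace sketch of the check this patch simplifies. The
count-word bit values mirror kernel/locking/rwsem.c at the time of the
patch, but the helper names (reader_can_exit_old/new) are illustrative,
not kernel functions:

/* rwsem_check.c - hypothetical standalone model; build: cc rwsem_check.c */
#include <stdio.h>

/* Count-word bits, copied from kernel/locking/rwsem.c for illustration */
#define RWSEM_WRITER_LOCKED	(1UL << 0)
#define RWSEM_FLAG_WAITERS	(1UL << 1)
#define RWSEM_FLAG_HANDOFF	(1UL << 2)
#define RWSEM_WRITER_MASK	RWSEM_WRITER_LOCKED

/* Old check: even with an empty wait queue, the reader also tested
 * the handoff bit before taking the reader-bias fast exit. */
static int reader_can_exit_old(unsigned long count)
{
	return !(count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF));
}

/* New check: d257cc8cb8d5 guarantees RWSEM_FLAG_HANDOFF is cleared
 * whenever the wait queue becomes empty, so on this path the writer
 * bit alone decides. */
static int reader_can_exit_new(unsigned long count)
{
	return !(count & RWSEM_WRITER_MASK);
}

int main(void)
{
	/* Counts reachable while the wait queue is empty: neither
	 * WAITERS nor HANDOFF can be set, so both checks agree. */
	unsigned long reachable[] = { 0UL, RWSEM_WRITER_LOCKED };
	unsigned int i;

	for (i = 0; i < 2; i++)
		printf("count=%lu old=%d new=%d\n", reachable[i],
		       reader_can_exit_old(reachable[i]),
		       reader_can_exit_new(reachable[i]));
	return 0;
}

The equivalence the sketch prints is the whole point of the patch: once
the empty-queue invariant from d257cc8cb8d5 holds, the HANDOFF test can
never change the outcome on the reader fast-exit path, so it is dead
code and can be dropped.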