Message-Id: <1531247490-26852-1-git-send-email-longman@redhat.com>
Date: Tue, 10 Jul 2018 14:31:30 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org, Mark Ray <mark.ray@....com>,
Joe Mario <jmario@...hat.com>,
Scott Norton <scott.norton@....com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH] locking/rwsem: Take read lock immediately if queue is empty with no writer
It was found that a constant stream of readers could keep the semaphore
count negative most of the time after an initial trigger by a writer,
even when no writer was present afterward. As a result, most of the
readers had to go through the slowpath, reducing their performance.
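
For background, below is a minimal sketch of the xadd-based reader
fast path of this era (not the kernel source verbatim; kernel context
and 64-bit bias values assumed), showing why a negative count diverts
every incoming reader into the slowpath:

	#define RWSEM_ACTIVE_READ_BIAS	1L
	#define RWSEM_WAITING_BIAS	(-(1L << 32))	/* set while waiters are queued */

	static inline void __down_read_sketch(struct rw_semaphore *sem)
	{
		/*
		 * Each reader adds ACTIVE_READ_BIAS. A result <= 0 means
		 * a writer is active or the waiting bias is set, so the
		 * reader must fall back to the slowpath.
		 */
		if (unlikely(atomic_long_inc_return_acquire(&sem->count) <= 0))
			rwsem_down_read_failed(sem);
	}
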
To keep the count from staying negative in that situation, an
additional check is added to detect the special case where the reader
in the wait_lock critical section is about to become the only waiter
in an otherwise empty queue and no writer is present. When that
happens, the reader can simply take the lock and return immediately
without further action, as illustrated in the sketch below. Incoming
readers then won't see any waiter present and so won't be forced into
the slowpath.
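
For concreteness, here is a hedged sketch of the count arithmetic
behind the new test (bias values as in the sketch above; the helper
name below is hypothetical and not part of the patch):

	/*
	 * Hypothetical helper (not in the patch) expressing the new
	 * test. With wait_lock held, the wait list empty, and this
	 * reader's own fast-path +1 already in the count:
	 *
	 *   count >= 1  only readers are active, no writer, no waiting
	 *		 bias -> the reader may keep the lock at once
	 *   count < 0   a writer is active (its write bias makes the
	 *		 sum negative) -> fall through, add
	 *		 RWSEM_WAITING_BIAS and queue as before
	 */
	static inline bool rwsem_reader_lock_stealable(struct rw_semaphore *sem)
	{
		return atomic_long_read(&sem->count) >= 0;
	}
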
After the list_empty() call, the CPU should have the lock cacheline
anyway, so the additional semaphore count check shouldn't have any
measurable performance impact.
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/rwsem-xadd.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 3064c50..ef8a5f3 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -233,8 +233,22 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
waiter.type = RWSEM_WAITING_FOR_READ;
raw_spin_lock_irq(&sem->wait_lock);
- if (list_empty(&sem->wait_list))
+ if (list_empty(&sem->wait_list)) {
+ /*
+ * In the unlikely event that the task is the only one in
+ * the wait queue and a writer isn't present, it can have
+ * the lock and return immediately without going through
+ * the remaining slowpath code.
+ *
+ * The count won't actually be 0 here, but allowing 0 in the
+ * check will probably generate better code.
+ */
+ if (unlikely(atomic_long_read(&sem->count) >= 0)) {
+ raw_spin_unlock_irq(&sem->wait_lock);
+ return sem;
+ }
adjustment += RWSEM_WAITING_BIAS;
+ }
list_add_tail(&waiter.list, &sem->wait_list);
/* we're now waiting on the lock, but no longer actively locking */
--
1.8.3.1