Message-Id: <20201121041416.12285-4-longman@redhat.com>
Date: Fri, 20 Nov 2020 23:14:14 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org, Davidlohr Bueso <dave@...olabs.net>,
Phil Auld <pauld@...hat.com>, Waiman Long <longman@...hat.com>
Subject: [PATCH v2 3/5] locking/rwsem: Enable reader optimistic lock stealing
If the optimistic spinning queue is empty and the rwsem does not have
the handoff or write-lock bits set, there is no need to go through
rwsem_optimistic_spin() at all. The reader can steal the lock directly,
since its reader bias has already been added to the count. If it is the
first reader in this state, it will also try to wake up the other
readers in the wait queue.
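
In outline, the check added to rwsem_down_read_slowpath() amounts to
the sketch below (it mirrors the hunk further down; rcnt is the reader
count taken from "count" earlier in the slow path):

	if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
	    rwsem_no_spinners(sem)) {
		/* Reader bias is already in the count, the lock is ours */
		rwsem_set_reader_owned(sem);
		lockevent_inc(rwsem_rlock_steal);
		if (rcnt == 1)		/* first reader: wake other waiting readers */
			goto wake_readers;
		return sem;
	}
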
With this patch applied, the following lock event counts were observed
after rebooting a 2-socket system and doing a "make -j96" kernel
rebuild:
  rwsem_opt_rlock=4437
  rwsem_rlock=29
  rwsem_rlock_steal=19
So lock stealing represents about 0.4% of all the read locks acquired
in the slow path.
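That is, 19 stolen read locks out of 4437 + 29 + 19 = 4485 slow-path
read-lock acquisitions, or roughly 0.42%.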
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem.c | 28 ++++++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 239039d0ce21..270a0d351932 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -63,6 +63,7 @@ LOCK_EVENT(rwsem_opt_nospin) /* # of disabled optspins */
LOCK_EVENT(rwsem_opt_norspin) /* # of disabled reader-only optspins */
LOCK_EVENT(rwsem_opt_rlock2) /* # of opt-acquired 2ndary read locks */
LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
+LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
LOCK_EVENT(rwsem_rlock_fail) /* # of failed read lock acquisitions */
LOCK_EVENT(rwsem_rlock_handoff) /* # of read lock handoffs */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index a961c5c53b70..b373990fcab8 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -957,6 +957,12 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
}
return false;
}
+
+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+	return !osq_is_locked(&sem->osq);
+}
+
#else
static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
unsigned long nonspinnable)
@@ -977,6 +983,11 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
return false;
}
+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+	return false;
+}
+
static inline int
rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
{
@@ -1007,6 +1018,22 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
!(count & RWSEM_WRITER_LOCKED))
goto queue;
+	/*
+	 * Reader optimistic lock stealing
+	 *
+	 * We can take the read lock directly without doing
+	 * rwsem_optimistic_spin() if the conditions are right.
+	 * Also wake up other readers if it is the first reader.
+	 */
+	if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
+	    rwsem_no_spinners(sem)) {
+		rwsem_set_reader_owned(sem);
+		lockevent_inc(rwsem_rlock_steal);
+		if (rcnt == 1)
+			goto wake_readers;
+		return sem;
+	}
+
/*
* Save the current read-owner of rwsem, if available, and the
* reader nonspinnable bit.
@@ -1029,6 +1056,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
* Wake up other readers in the wait list if the front
* waiter is a reader.
*/
+wake_readers:
if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
raw_spin_lock_irq(&sem->wait_lock);
if (!list_empty(&sem->wait_list))
--
2.18.1