Message-Id: <20230327202413.1955856-8-longman@redhat.com>
Date: Mon, 27 Mar 2023 16:24:12 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH v2 7/8] locking/rwsem: Use the force
From: Peter Zijlstra <peterz@...radead.org>
Now that the writer adjustment is done from the wakeup side and
HANDOFF guarantees spinning/stealing is disabled, use the combined
guarantee to ignore spurious READER_BIAS and directly claim the lock.
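
For illustration, here is a minimal userspace sketch (not part of the
patch) that models only the sem->count arithmetic of the new handoff
path.  The bit definitions mirror kernel/locking/rwsem.c; the starting
count and the only_waiter flag are made-up example inputs:

	/*
	 * Userspace sketch of the handoff claim: unlocked, HANDOFF and
	 * WAITERS set, plus one spurious READER_BIAS to show that it is
	 * tolerated.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	#define RWSEM_WRITER_LOCKED	(1UL << 0)
	#define RWSEM_FLAG_WAITERS	(1UL << 1)
	#define RWSEM_FLAG_HANDOFF	(1UL << 2)
	#define RWSEM_READER_SHIFT	8
	#define RWSEM_READER_BIAS	(1UL << RWSEM_READER_SHIFT)

	int main(void)
	{
		unsigned long count = RWSEM_FLAG_HANDOFF | RWSEM_FLAG_WAITERS |
				      RWSEM_READER_BIAS;
		bool only_waiter = true;  /* list_is_singular(&sem->wait_list) */

		if ((count & (RWSEM_FLAG_HANDOFF | RWSEM_WRITER_LOCKED)) ==
		    RWSEM_FLAG_HANDOFF) {
			unsigned long adjustment =
				RWSEM_WRITER_LOCKED - RWSEM_FLAG_HANDOFF;

			if (only_waiter)
				adjustment -= RWSEM_FLAG_WAITERS;

			count += adjustment;  /* atomic_long_add() in the kernel */
		}

		printf("count = %#lx: writer locked=%lu handoff=%lu readers=%lu\n",
		       count,
		       count & RWSEM_WRITER_LOCKED,
		       (count & RWSEM_FLAG_HANDOFF) >> 2,
		       count >> RWSEM_READER_SHIFT);
		return 0;
	}

Running it ends with WRITER_LOCKED set, HANDOFF cleared and the leftover
READER_BIAS still counted, i.e. the spurious reader bias is only
temporary and does not prevent the claim.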
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/locking/rwsem.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index ee8861effcc2..7bd26e64827a 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -377,8 +377,8 @@ rwsem_add_waiter(struct rw_semaphore *sem, struct rwsem_waiter *waiter)
/*
* Remove a waiter from the wait_list and clear flags.
*
- * Both rwsem_reader_wake() and rwsem_try_write_lock() contain a full 'copy' of
- * this function. Modify with care.
+ * Both rwsem_{reader,writer}_wake() and rwsem_try_write_lock() contain a full
+ * 'copy' of this function. Modify with care.
*
* Return: true if wait_list isn't empty and false otherwise
*/
@@ -479,8 +479,33 @@ static void rwsem_writer_wake(struct rw_semaphore *sem,
struct rwsem_waiter *waiter,
struct wake_q_head *wake_q)
{
- if (rwsem_try_write_lock(sem, waiter))
- rwsem_waiter_wake(waiter, wake_q);
+ long count = atomic_long_read(&sem->count);
+
+ /*
+ * Since rwsem_mark_wake() is only called (with WAKE_ANY) when
+ * the lock is unlocked, and the HANDOFF bit guarantees that
+ * all spinning / stealing is disabled, it is possible to
+ * unconditionally claim the lock -- any READER_BIAS will be
+ * temporary.
+ */
+ if ((count & (RWSEM_FLAG_HANDOFF|RWSEM_WRITER_LOCKED)) == RWSEM_FLAG_HANDOFF) {
+ unsigned long adjustment = RWSEM_WRITER_LOCKED - RWSEM_FLAG_HANDOFF;
+
+ if (list_is_singular(&sem->wait_list))
+ adjustment -= RWSEM_FLAG_WAITERS;
+
+ atomic_long_add(adjustment, &sem->count);
+ /*
+ * Have rwsem_writer_wake() fully imply rwsem_del_waiter() on
+ * success.
+ */
+ list_del(&waiter->list);
+ atomic_long_set(&sem->owner, (long)waiter->task);
+
+ } else if (!rwsem_try_write_lock(sem, waiter))
+ return;
+
+ rwsem_waiter_wake(waiter, wake_q);
}
static void rwsem_reader_wake(struct rw_semaphore *sem,
--
2.31.1