Message-ID: <167473312559.4906.10064248025918048573.tip-bot2@tip-bot2>
Date: Thu, 26 Jan 2023 11:38:45 -0000
From: "tip-bot2 for Waiman Long" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Waiman Long <longman@...hat.com>, Ingo Molnar <mingo@...nel.org>,
Mukesh Ojha <quic_mojha@...cinc.com>, stable@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/rwsem: Prevent non-first waiter from
spinning in down_write() slowpath

The following commit has been merged into the locking/core branch of tip:

Commit-ID: b613c7f31476c44316bfac1af7cac714b7d6bef9
Gitweb: https://git.kernel.org/tip/b613c7f31476c44316bfac1af7cac714b7d6bef9
Author: Waiman Long <longman@...hat.com>
AuthorDate: Wed, 25 Jan 2023 19:36:25 -05:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Thu, 26 Jan 2023 11:46:45 +01:00

locking/rwsem: Prevent non-first waiter from spinning in down_write() slowpath

A non-first waiter can potentially spin in the for loop of
rwsem_down_write_slowpath() without sleeping, yet still fail to acquire
the lock even when the rwsem is free, if the following sequence happens:

  Non-first RT waiter      First waiter            Lock holder
  -------------------      ------------            -----------
                           Acquire wait_lock
                           rwsem_try_write_lock():
                             Set handoff bit if RT or
                               wait too long
                             Set waiter->handoff_set
                           Release wait_lock
  Acquire wait_lock
  Inherit waiter->handoff_set
  Release wait_lock
                                                   Clear owner
                                                   Release lock
  if (waiter.handoff_set) {
    rwsem_spin_on_owner();
    if (OWNER_NULL)
      goto trylock_again;
  }
  trylock_again:
  Acquire wait_lock
  rwsem_try_write_lock():
    if (first->handoff_set && (waiter != first))
      return false;
  Release wait_lock

A non-first waiter cannot actually acquire the rwsem even if it
mistakenly believes it can spin on the OWNER_NULL value. If that waiter
happens to be an RT task running on the same CPU as the first waiter,
it can block the first waiter from acquiring the rwsem, leading to a
livelock.

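For reference, the slowpath retry loop that the sequence above steps
through looks roughly like this (a condensed sketch of
rwsem_down_write_slowpath() in kernel/locking/rwsem.c around this
commit; signal handling, tracing and lock-event accounting are elided):

    for (;;) {
            if (rwsem_try_write_lock(sem, &waiter))
                    break;          /* lock acquired */

            raw_spin_unlock_irq(&sem->wait_lock);

            /*
             * A waiter with handoff_set skips the schedule() below and
             * spins on the owner instead. This is the spin-without-
             * sleeping path taken by the non-first waiter above.
             */
            if (waiter.handoff_set) {
                    if (rwsem_spin_on_owner(sem) == OWNER_NULL)
                            goto trylock_again;
            }

            schedule_preempt_disabled();
            set_current_state(state);
    trylock_again:
            raw_spin_lock_irq(&sem->wait_lock);
    }
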
Fix this problem by making sure that a non-first waiter cannot spin in
the slowpath loop without sleeping.

Fixes: d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling more consistent")
Signed-off-by: Waiman Long <longman@...hat.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Tested-by: Mukesh Ojha <quic_mojha@...cinc.com>
Reviewed-by: Mukesh Ojha <quic_mojha@...cinc.com>
Cc: stable@...r.kernel.org
Link: https://lore.kernel.org/r/20230126003628.365092-2-longman@redhat.com
---
kernel/locking/rwsem.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 4487359..be2df9e 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -624,18 +624,16 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
                          */
                         if (first->handoff_set && (waiter != first))
                                 return false;
-
-                        /*
-                         * First waiter can inherit a previously set handoff
-                         * bit and spin on rwsem if lock acquisition fails.
-                         */
-                        if (waiter == first)
-                                waiter->handoff_set = true;
                 }
 
                 new = count;
 
                 if (count & RWSEM_LOCK_MASK) {
+                        /*
+                         * A waiter (first or not) can set the handoff bit
+                         * if it is an RT task or wait in the wait queue
+                         * for too long.
+                         */
                         if (has_handoff || (!rt_task(waiter->task) &&
                                             !time_after(jiffies, waiter->timeout)))
                                 return false;
@@ -651,11 +649,12 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
         } while (!atomic_long_try_cmpxchg_acquire(&sem->count, &count, new));
 
         /*
-         * We have either acquired the lock with handoff bit cleared or
-         * set the handoff bit.
+         * We have either acquired the lock with handoff bit cleared or set
+         * the handoff bit. Only the first waiter can have its handoff_set
+         * set here to enable optimistic spinning in slowpath loop.
          */
         if (new & RWSEM_FLAG_HANDOFF) {
-                waiter->handoff_set = true;
+                first->handoff_set = true;
                 lockevent_inc(rwsem_wlock_handoff);
                 return false;
         }
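
For readability, the affected parts of rwsem_try_write_lock() read
roughly as follows after the patch (a condensed merged view of the two
hunks above; the lock-acquisition branch and other surrounding code are
elided):

        do {
                bool has_handoff = !!(count & RWSEM_FLAG_HANDOFF);

                if (has_handoff) {
                        /*
                         * A non-first waiter no longer inherits
                         * handoff_set here, so it goes back to sleep
                         * instead of spinning.
                         */
                        if (first->handoff_set && (waiter != first))
                                return false;
                }

                new = count;

                if (count & RWSEM_LOCK_MASK) {
                        if (has_handoff || (!rt_task(waiter->task) &&
                                            !time_after(jiffies, waiter->timeout)))
                                return false;

                        new |= RWSEM_FLAG_HANDOFF;
                }
                ...
        } while (!atomic_long_try_cmpxchg_acquire(&sem->count, &count, new));

        /*
         * Only the first waiter can have handoff_set set here, so only
         * the first waiter may optimistically spin in the slowpath loop.
         */
        if (new & RWSEM_FLAG_HANDOFF) {
                first->handoff_set = true;
                lockevent_inc(rwsem_wlock_handoff);
                return false;
        }

The key change is the last hunk: handoff_set is now only ever set on
the first waiter, so a non-first waiter always falls through to
schedule() in the slowpath loop instead of spinning on a lock it cannot
take.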