Message-Id: <20200807191636.75045-2-sultan@kerneltoast.com>
Date: Fri, 7 Aug 2020 12:16:36 -0700
From: Sultan Alsawaf <sultan@...neltoast.com>
To: unlisted-recipients:; (no To-header on input)
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
linux-kernel@...r.kernel.org,
Sultan Alsawaf <sultan@...neltoast.com>
Subject: [PATCH 2/2] locking/rwsem: Don't hog RCU read lock while optimistically spinning
From: Sultan Alsawaf <sultan@...neltoast.com>
There's no reason to hold the RCU read lock for the entire time spent
optimistically spinning on a rwsem. Doing so can needlessly lengthen RCU
grace periods and slow down synchronize_rcu() when grace periods aren't
being brute-forced via rcupdate.rcu_expedited=1.
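
To make the resulting control flow easier to follow, here is a sketch of
the spin loop in rwsem_spin_on_owner() with this patch applied, assembled
from the hunks below (illustrative kernel-style C, not a standalone
program):

	for (;;) {
		bool same_owner;

		/* Only hold the RCU read lock while sem->owner is inspected. */
		rcu_read_lock();
		new = rwsem_owner_flags(sem, &new_flags);
		same_owner = new == owner && new_flags == flags;
		if (same_owner && !owner_on_cpu(owner))
			state = OWNER_NONSPINNABLE;
		rcu_read_unlock();

		/* The owner changed: reevaluate it and stop spinning on the old one. */
		if (!same_owner) {
			state = rwsem_owner_state(new, new_flags, nonspinnable);
			break;
		}

		/* Same owner, but it went off-CPU: stop spinning. */
		if (state == OWNER_NONSPINNABLE)
			break;

		/* Give up the CPU if something else needs to run. */
		if (need_resched()) {
			state = OWNER_NONSPINNABLE;
			break;
		}

		cpu_relax();
	}

Note that owner_on_cpu(owner) is evaluated inside the RCU read-side
critical section, since owner may point to freed memory once the read
lock is dropped; only the saved boolean/state results are used after
rcu_read_unlock().
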
Signed-off-by: Sultan Alsawaf <sultan@...neltoast.com>
---
 kernel/locking/rwsem.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index f11b9bd3431d..a1e3ceb254d1 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -723,8 +723,10 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
 	if (state != OWNER_WRITER)
 		return state;
 
-	rcu_read_lock();
 	for (;;) {
+		bool same_owner;
+
+		rcu_read_lock();
 		/*
 		 * When a waiting writer set the handoff flag, it may spin
 		 * on the owner as well. Once that writer acquires the lock,
@@ -732,27 +734,32 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
 		 * handoff bit is set.
 		 */
 		new = rwsem_owner_flags(sem, &new_flags);
-		if ((new != owner) || (new_flags != flags)) {
-			state = rwsem_owner_state(new, new_flags, nonspinnable);
-			break;
-		}
 
 		/*
-		 * Ensure we emit the owner->on_cpu, dereference _after_
-		 * checking sem->owner still matches owner, if that fails,
+		 * Ensure sem->owner still matches owner. If that fails,
 		 * owner might point to free()d memory, if it still matches,
 		 * the rcu_read_lock() ensures the memory stays valid.
 		 */
-		barrier();
+		same_owner = new == owner && new_flags == flags;
+		if (same_owner && !owner_on_cpu(owner))
+			state = OWNER_NONSPINNABLE;
+		rcu_read_unlock();
 
-		if (need_resched() || !owner_on_cpu(owner)) {
+		if (!same_owner) {
+			state = rwsem_owner_state(new, new_flags, nonspinnable);
+			break;
+		}
+
+		if (state == OWNER_NONSPINNABLE)
+			break;
+
+		if (need_resched()) {
 			state = OWNER_NONSPINNABLE;
 			break;
 		}
 
 		cpu_relax();
 	}
-	rcu_read_unlock();
 
 	return state;
 }
--
2.28.0