Message-Id: <20200807191636.75045-1-sultan@kerneltoast.com>
Date: Fri, 7 Aug 2020 12:16:35 -0700
From: Sultan Alsawaf <sultan@...neltoast.com>
To: unlisted-recipients:; (no To-header on input)
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
linux-kernel@...r.kernel.org,
Sultan Alsawaf <sultan@...neltoast.com>
Subject: [PATCH 1/2] locking/mutex: Don't hog RCU read lock while optimistically spinning
From: Sultan Alsawaf <sultan@...neltoast.com>
There's no reason to hold an RCU read lock for the entire time spent
optimistically spinning on a mutex owner. Doing so can needlessly
lengthen RCU grace periods and slow down synchronize_rcu() when
expedited grace periods aren't being forced via
rcupdate.rcu_expedited=1.
Signed-off-by: Sultan Alsawaf <sultan@...neltoast.com>
---
 kernel/locking/mutex.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5352ce50a97e..cc5676712458 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -552,21 +552,31 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 {
 	bool ret = true;
 
-	rcu_read_lock();
-	while (__mutex_owner(lock) == owner) {
+	for (;;) {
+		unsigned int cpu;
+		bool same_owner;
+
 		/*
-		 * Ensure we emit the owner->on_cpu, dereference _after_
-		 * checking lock->owner still matches owner. If that fails,
+		 * Ensure lock->owner still matches owner. If that fails,
 		 * owner might point to freed memory. If it still matches,
 		 * the rcu_read_lock() ensures the memory stays valid.
 		 */
-		barrier();
+		rcu_read_lock();
+		same_owner = __mutex_owner(lock) == owner;
+		if (same_owner) {
+			ret = owner->on_cpu;
+			if (ret)
+				cpu = task_cpu(owner);
+		}
+		rcu_read_unlock();
+
+		if (!ret || !same_owner)
+			break;
 
 		/*
 		 * Use vcpu_is_preempted to detect lock holder preemption issue.
 		 */
-		if (!owner->on_cpu || need_resched() ||
-		    vcpu_is_preempted(task_cpu(owner))) {
+		if (need_resched() || vcpu_is_preempted(cpu)) {
 			ret = false;
 			break;
 		}
@@ -578,7 +588,6 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
 
 		cpu_relax();
 	}
-	rcu_read_unlock();
 
 	return ret;
 }
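
For readers who want to experiment with the loop shape outside the
kernel tree, here is a minimal userspace C sketch. The struct task,
struct mutex, the mutex_owner() helper, and the no-op
rcu_read_lock()/rcu_read_unlock() stubs are simplified stand-ins
invented purely for illustration (real kernel RCU defers freeing
across those markers); only the control flow mirrors the patched
mutex_spin_on_owner(), where the read-side critical section closes on
every iteration instead of spanning the whole spin. Builds with any
C11 compiler, e.g. cc -std=c11 sketch.c.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* No-op stand-ins for the kernel's RCU markers (illustration only). */
static void rcu_read_lock(void) { }
static void rcu_read_unlock(void) { }

struct task {
	_Atomic int on_cpu;	/* 1 while the owner runs on a CPU */
};

struct mutex {
	_Atomic(struct task *) owner;
};

static struct task *mutex_owner(struct mutex *lock)
{
	return atomic_load_explicit(&lock->owner, memory_order_acquire);
}

/*
 * Mirrors the patched loop: enter and exit the read-side critical
 * section on every iteration instead of holding it across the spin.
 */
static bool spin_on_owner(struct mutex *lock, struct task *owner)
{
	bool ret = true;

	for (;;) {
		bool same_owner;

		rcu_read_lock();
		same_owner = mutex_owner(lock) == owner;
		if (same_owner)
			ret = atomic_load_explicit(&owner->on_cpu,
						   memory_order_relaxed);
		rcu_read_unlock();	/* grace periods may complete here */

		if (!ret || !same_owner)
			break;
		/* the kernel spins with cpu_relax() here */
	}
	return ret;
}

int main(void)
{
	struct task t = { .on_cpu = 0 };	/* owner already off CPU */
	struct mutex m = { .owner = &t };

	/* Prints 0: the owner still matches but is not on a CPU. */
	printf("spin result: %d\n", spin_on_owner(&m, &t));
	return 0;
}

Note how, as in the patch itself, everything derived from the owner
pointer is sampled inside the critical section; that is why the patch
reads cpu = task_cpu(owner) under rcu_read_lock(), so that
vcpu_is_preempted() is never handed a CPU number loaded from freed
task memory.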
--
2.28.0