Message-ID: <20150306123233.GA9972@gmail.com>
Date: Fri, 6 Mar 2015 13:32:33 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Sasha Levin <sasha.levin@...cle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...emonkey.org.uk>,
Davidlohr Bueso <dave@...olabs.net>, jason.low2@...com,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: sched: softlockups in multi_cpu_stop
* Sasha Levin <sasha.levin@...cle.com> wrote:
> I've bisected this to "locking/rwsem: Check for active lock before bailing on spinning". Relevant parties Cc'ed.
That would be:
1a99367023f6 ("locking/rwsem: Check for active lock before bailing on spinning")
attached below.
Thanks,
Ingo
===========================>
From 1a99367023f6ac664365a37fa508b059e31d0e88 Mon Sep 17 00:00:00 2001
From: Davidlohr Bueso <dave@...olabs.net>
Date: Fri, 30 Jan 2015 01:14:27 -0800
Subject: [PATCH] locking/rwsem: Check for active lock before bailing on spinning
37e9562453b ("locking/rwsem: Allow conservative optimistic
spinning when readers have lock") forced the default for
optimistic spinning to be disabled if the lock owner was
nil, which makes much sense for readers. However, while
it is not our priority, we can make some optimizations
for write-mostly workloads. We can bail the spinning step
and still be conservative if there are any active tasks,
otherwise there's really no reason not to spin, as the
semaphore is most likely unlocked.
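For reference, "active" here is read straight off the rwsem count
word. A minimal sketch of the xadd counting scheme the check relies
on (constants as in the 64-bit <asm/rwsem.h> of that era; treat the
exact values as an assumption):

	/*
	 * Each reader adds RWSEM_ACTIVE_READ_BIAS and a writer adds
	 * RWSEM_ACTIVE_WRITE_BIAS, so a nonzero (count & RWSEM_ACTIVE_MASK)
	 * means someone still holds the lock even when sem->owner is NULL
	 * (readers never set ->owner).
	 */
	#define RWSEM_UNLOCKED_VALUE     0x00000000L
	#define RWSEM_ACTIVE_BIAS        0x00000001L
	#define RWSEM_ACTIVE_MASK        0xffffffffL
	#define RWSEM_WAITING_BIAS       (-RWSEM_ACTIVE_MASK-1)
	#define RWSEM_ACTIVE_READ_BIAS   RWSEM_ACTIVE_BIAS
	#define RWSEM_ACTIVE_WRITE_BIAS  (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)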
This patch recovers most of the throughput of a Unixbench 'execl'
benchmark by sleeping less; the time previously spent idle now shows
up as system time in the spin loop:
before:
CPU %user %nice %system %iowait %steal %idle
all 0.60 0.00 8.02 0.00 0.00 91.38
after:
CPU %user %nice %system %iowait %steal %idle
all 1.22 0.00 70.18 0.00 0.00 28.60
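(The utilization figures are mpstat(1)-style averages; assuming the
usual tooling, running something like 'mpstat 5' alongside UnixBench's
'./Run execl' should reproduce this view.)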
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Jason Low <jason.low2@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Michel Lespinasse <walken@...gle.com>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>
Link: http://lkml.kernel.org/r/1422609267-15102-6-git-send-email-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/locking/rwsem-xadd.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 1c0d11e8ce34..e4ad019e23f5 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -298,23 +298,30 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 {
 	struct task_struct *owner;
-	bool on_cpu = false;
+	bool ret = true;
 
 	if (need_resched())
 		return false;
 
 	rcu_read_lock();
 	owner = ACCESS_ONCE(sem->owner);
-	if (owner)
-		on_cpu = owner->on_cpu;
-	rcu_read_unlock();
+	if (!owner) {
+		long count = ACCESS_ONCE(sem->count);
+		/*
+		 * If sem->owner is not set, yet we have just recently entered the
+		 * slowpath with the lock being active, then there is a possibility
+		 * reader(s) may have the lock. To be safe, bail spinning in these
+		 * situations.
+		 */
+		if (count & RWSEM_ACTIVE_MASK)
+			ret = false;
+		goto done;
+	}
 
-	/*
-	 * If sem->owner is not set, yet we have just recently entered the
-	 * slowpath, then there is a possibility reader(s) may have the lock.
-	 * To be safe, avoid spinning in these situations.
-	 */
-	return on_cpu;
+	ret = owner->on_cpu;
+done:
+	rcu_read_unlock();
+	return ret;
 }
 
 static inline bool owner_running(struct rw_semaphore *sem,
--
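For context, the predicate patched above is what gates entry into the
optimistic-spinning path. Roughly, paraphrased from the same-era
rwsem_optimistic_spin() in kernel/locking/rwsem-xadd.c (details
elided, so treat this as a sketch rather than the verbatim function):

	static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
	{
		bool taken = false;

		preempt_disable();

		/* sem->wait_lock should not be held when doing optimistic spinning */
		if (!rwsem_can_spin_on_owner(sem))
			goto done;

		/* serialize spinners via the MCS-based osq lock */
		if (!osq_lock(&sem->osq))
			goto done;

		/*
		 * Spin while the owner is running on a CPU, repeatedly trying
		 * rwsem_try_write_lock_unqueued() to take the lock.
		 */

		osq_unlock(&sem->osq);
	done:
		preempt_enable();
		return taken;
	}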