Message-ID: <20151105015903.GA14609@linux.vnet.ibm.com>
Date: Wed, 4 Nov 2015 17:59:03 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
	Dave Jones <davej@...emonkey.org.uk>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...hat.com>,
	Stephane Eranian <eranian@...il.com>
Subject: Re: perf related lockdep bug
On Wed, Nov 04, 2015 at 04:55:28PM -0800, Paul E. McKenney wrote:
> On Wed, Nov 04, 2015 at 09:58:36PM +0100, Andi Kleen wrote:
> >
> > I tested my perf stress workload with the patch applied on 4.3,
> > unfortunately got a hang again :-/
>
> Any diagnostics, sysrq-T output, or whatever?
Given that it looks like your hang is happening at runtime, I would guess
that the following patch won't help, but who knows?
Thanx, Paul
------------------------------------------------------------------------
commit 05faf451f1239a28fcd63bf4b66c0db57d7b13f9
Author: Peter Zijlstra <peterz@...radead.org>
Date: Wed Nov 4 08:22:05 2015 -0800
rcu: Move wakeup out from under rnp->lock
This patch removes a potential deadlock hazard by moving the
wake_up_process() in rcu_spawn_gp_kthread() out from under rnp->lock.
Signed-off-by: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index caf3651fa5c9..183445959d00 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4323,8 +4323,8 @@ static int __init rcu_spawn_gp_kthread(void)
 			sp.sched_priority = kthread_prio;
 			sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
 		}
-		wake_up_process(t);
 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
+		wake_up_process(t);
 	}
 	rcu_spawn_nocb_kthreads();
 	rcu_spawn_boost_kthreads();
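
As an aside, the same discipline can be illustrated outside the kernel.
The sketch below is a minimal userspace analogue using pthreads (the
names and the pthread setting are illustrative assumptions, not anything
from the patch above): publish state under the lock, drop the lock, and
only then issue the wakeup, so the woken side can never contend for a
lock the waker still holds.  Build with "gcc -pthread".

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  state_cv   = PTHREAD_COND_INITIALIZER;
static bool thread_ready;

/* Waiter: sleeps until the flag has been published. */
static void *waiter(void *arg)
{
	pthread_mutex_lock(&state_lock);
	while (!thread_ready)
		pthread_cond_wait(&state_cv, &state_lock);
	pthread_mutex_unlock(&state_lock);
	puts("waiter: woken after the lock was dropped");
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);

	/* Publish state under the lock... */
	pthread_mutex_lock(&state_lock);
	thread_ready = true;
	pthread_mutex_unlock(&state_lock);

	/* ...but wake only after dropping it, mirroring the patch:
	 * the woken path cannot block on a lock we still hold.  No
	 * wakeup is lost, because the waiter rechecks the predicate
	 * under the lock before sleeping. */
	pthread_cond_signal(&state_cv);

	pthread_join(t, NULL);
	return 0;
}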
--