Message-ID: <20140823165121.GJ2663@linux.vnet.ibm.com>
Date: Sat, 23 Aug 2014 09:51:22 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Pranith Kumar <pranith@...ech.edu>
Cc: Amit Shah <amit.shah@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Lai Jiangshan <laijs@...fujitsu.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>, dvhart@...ux.intel.com,
Frédéric Weisbecker <fweisbec@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Silas Boyd-Wickizer <sbw@....edu>
Subject: Re: [PATCH tip/core/rcu 1/2] rcu: Parallelize and economize NOCB
kthread wakeups
On Sat, Aug 23, 2014 at 03:43:38AM -0400, Pranith Kumar wrote:
> On Fri, Aug 22, 2014 at 5:53 PM, Paul E. McKenney
> <paulmck@...ux.vnet.ibm.com> wrote:
> >
> > Hmmm... Please try replacing the synchronize_rcu() in
> > __sysrq_swap_key_ops() with (say) schedule_timeout_interruptible(HZ / 10).
> > I bet that gets rid of the hang. (And also introduces a low-probability
> > bug, but should be OK for testing.)
> >
> > The other thing to try is to revert your patch that turned my event
> > traces into printk()s, then put an ftrace_dump(DUMP_ALL); just after
> > the synchronize_rcu() -- that might make it so that the ftrace data
> > actually gets dumped out.
> >
>
> I was able to reproduce this error on my Ubuntu 14.04 machine. I think
> I found the root cause of the problem after several kvm runs.
>
> The problem is that earlier we were waiting on nocb_head and now we
> are waiting on nocb_leader_wake.
>
> So there are a lot of nocb callbacks enqueued before the nocb kthread is
> spawned. These leave nocb_head non-NULL, which is why the nocb kthread
> used to wake up immediately after going to sleep.
>
> Now that we have switched to waiting on nocb_leader_wake, that flag is not
> set for pending callbacks unless they overflow qhimark. Around 7000
> callbacks were pending at the point where boot hangs.
>
> So lowering qhimark with the boot parameter rcutree.qhimark=5000 is one
> way to boot past this point by forcing a wakeup of the nocb kthread. I am
> not sure this is foolproof.
Unfortunately, not in all cases. A small kernel for embedded use might
register only a few callbacks during boot, which could still result
in a hang.
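
If it helps to picture the race, here is a quick userspace model of it
(plain pthreads, emphatically not the kernel code, with names that only
loosely echo the rcu_data fields):

/* Build with: gcc -pthread race_model.c -o race_model */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define QHIMARK 10000			/* stand-in for rcutree.qhimark */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static long nocb_q_count;		/* callbacks currently queued */
static bool nocb_head;			/* "head pointer is non-NULL" */
static bool nocb_leader_wake;		/* what the new code waits on */

static void enqueue_callback(void)
{
	pthread_mutex_lock(&lock);
	nocb_head = true;		/* old code waited on this */
	nocb_q_count++;
	/* Only overflowing the high-water mark sets the wake flag. */
	if (nocb_q_count > QHIMARK) {
		nocb_leader_wake = true;
		pthread_cond_signal(&cond);
	}
	pthread_mutex_unlock(&lock);
}

static void *nocb_kthread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	/*
	 * The pre-spawn callbacks set nocb_head but not nocb_leader_wake,
	 * so waiting on the latter sleeps indefinitely.
	 */
	while (!nocb_leader_wake)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	printf("worker woke up with %ld callbacks pending\n", nocb_q_count);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	/* ~7000 callbacks arrive before the worker thread is spawned. */
	for (int i = 0; i < 7000; i++)
		enqueue_callback();

	pthread_create(&tid, NULL, nocb_kthread, NULL);
	sleep(1);
	pthread_mutex_lock(&lock);
	printf("worker %s, %ld callbacks queued\n",
	       nocb_leader_wake ? "has been woken" : "is still asleep",
	       nocb_q_count);
	pthread_mutex_unlock(&lock);
	return 0;	/* exits with the worker still blocked */
}

With QHIMARK left at 10000 the worker never wakes, which is the hang;
dropping it below the ~7000 pre-spawn callbacks lets the enqueue path set
the flag, which is what the rcutree.qhimark=5000 workaround does.
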
> Another option is to start the nocb kthreads with nocb_leader_wake set,
> so that they can handle any pending callbacks. The following patch also
> allows us to boot properly.
This seems like a much better approach.
> Phew! Let me know if this makes any sense :)
It might well! Another possibility is that the early_initcall doing the
synchronize_rcu() runs before the early_initcall that creates the RCU
grace-period kthreads.
Seems like we need to close both holes. Let's see how your patch works
for Amit, and I am testing a patch for the possible early_initcall
ordering issue.
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 00dc411..4c397aa 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -2386,6 +2386,9 @@ static int rcu_nocb_kthread(void *arg)
>  	struct rcu_head **tail;
>  	struct rcu_data *rdp = arg;
>  
> +	if (rdp->nocb_leader == rdp)
> +		rdp->nocb_leader_wake = true;
> +
Not that it matters all that much, but given that the followers don't
ever reference ->nocb_leader_wake, we should be able to set this flag
unconditionally.
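
I.e., something like the following (untested sketch against your hunk
above, keeping your field names):

	if (rdp->nocb_leader == rdp)
		rdp->nocb_leader_wake = true;

becomes simply:

	/* Handle callbacks enqueued before this kthread was spawned. */
	rdp->nocb_leader_wake = true;
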
Thanx, Paul
>  	/* Each pass through this loop invokes one batch of callbacks */
>  	for (;;) {
>  		/* Wait for callbacks. */
>
>
> --
> Pranith
>