Message-ID: <20140827162113.GA2663@linux.vnet.ibm.com>
Date: Wed, 27 Aug 2014 09:21:13 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Amit Shah <amit.shah@...hat.com>
Cc: Pranith Kumar <pranith@...ech.edu>,
LKML <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Lai Jiangshan <laijs@...fujitsu.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>, dvhart@...ux.intel.com,
Frédéric Weisbecker <fweisbec@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Silas Boyd-Wickizer <sbw@....edu>
Subject: Re: [PATCH tip/core/rcu 1/2] rcu: Parallelize and economize NOCB kthread wakeups

On Wed, Aug 27, 2014 at 10:13:50AM +0530, Amit Shah wrote:
> On (Sat) 23 Aug 2014 [03:43:38], Pranith Kumar wrote:
> > On Fri, Aug 22, 2014 at 5:53 PM, Paul E. McKenney
> > <paulmck@...ux.vnet.ibm.com> wrote:
> > >
> > > Hmmm... Please try replacing the synchronize_rcu() in
> > > __sysrq_swap_key_ops() with (say) schedule_timeout_interruptible(HZ / 10).
> > > I bet that gets rid of the hang. (And also introduces a low-probability
> > > bug, but should be OK for testing.)
> > >
> > > The other thing to try is to revert your patch that turned my event
> > > traces into printk()s, then put an ftrace_dump(DUMP_ALL); just after
> > > the synchronize_rcu() -- that might make it so that the ftrace data
> > > actually gets dumped out.
> > >
> >
> > I was able to reproduce this error on my Ubuntu 14.04 machine. I think
> > I found the root cause of the problem after several kvm runs.
> >
> > The problem is that earlier we were waiting on nocb_head and now we
> > are waiting on nocb_leader_wake.
> >
> > So a lot of nocb callbacks are enqueued before the nocb kthread is
> > spawned. This leaves nocb_head non-NULL, which is why the nocb kthread
> > used to wake up immediately after sleeping.
> >
> > Now that we have switched to nocb_leader_wake, this flag is not set
> > when there are pending callbacks unless the callbacks overflow the
> > qhimark. There were around 7000 pending callbacks at the point where
> > boot hangs.
> >
> > So lowering qhimark with the boot parameter rcutree.qhimark=5000 is
> > one way to boot past this point by forcefully waking up the nocb
> > kthread. I am not sure this is fool-proof.
> >
> > Another option is to start the nocb kthreads with nocb_leader_wake set,
> > so that they can handle any pending callbacks. The following patch also
> > allows us to boot properly.
> >
> > Phew! Let me know if this makes any sense :)
> >
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 00dc411..4c397aa 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -2386,6 +2386,9 @@ static int rcu_nocb_kthread(void *arg)
> >  	struct rcu_head **tail;
> >  	struct rcu_data *rdp = arg;
> >  
> > +	if (rdp->nocb_leader == rdp)
> > +		rdp->nocb_leader_wake = true;
> > +
> >  	/* Each pass through this loop invokes one batch of callbacks */
> >  	for (;;) {
> >  		/* Wait for callbacks. */
>
> Yes, this patch helps my case as well.

Very good!!!

Pranith, I can take this patch, but would you be willing to invert
the sense of ->nocb_leader_wake (e.g., call it ->nocb_leader_sleep or
some such)? This field is only used in eight places in the source code.
The idea is that inverting the sense of the field allows the normal C
initialization of zero to properly initialize this field, plus it gets
rid of a few lines of code.
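
For illustration only (not a real patch), the leader's wait might then
look something like the sketch below.  The function name and everything
other than ->nocb_leader_sleep are my assumptions about the surrounding
code, roughly along the lines of nocb_leader_wait() in tree_plugin.h:

/*
 * Illustrative sketch only.  The leader kthread sleeps only while
 * ->nocb_leader_sleep (assumed name) is set, so the zero-initialized
 * default means "do not sleep" and callbacks enqueued before the
 * kthread is spawned get handled on the first pass through the loop.
 */
static void nocb_leader_wait(struct rcu_data *my_rdp)
{
	if (!rcu_nocb_poll)
		wait_event_interruptible(my_rdp->nocb_wq,
			!ACCESS_ONCE(my_rdp->nocb_leader_sleep));
	/* ... gather callbacks and hand off to followers as before ... */
}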

							Thanx, Paul