Message-ID: <20190809183957.GH28441@linux.ibm.com>
Date: Fri, 9 Aug 2019 11:39:57 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Peter Zijlstra <peterz@...radead.org>, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com
Subject: Re: [PATCH RFC tip/core/rcu 14/14] rcu/nohz: Make multi_cpu_stop()
enable tick on all online CPUs
On Fri, Aug 09, 2019 at 02:07:21PM -0400, Joel Fernandes wrote:
> On Fri, Aug 09, 2019 at 09:51:20AM -0700, Paul E. McKenney wrote:
> > > > > And of course I forgot to dump out the online CPUs, so I really had no
> > > > > idea whether or not all the CPUs were accounted for. I added tracing
> > > > > to dump out the online CPUs at the beginning of __stop_cpus() and then
> > > > > reworked it a few times to get the problem to happen in reasonable time.
> > > > > Please see below for the resulting annotated trace.
> > > > >
> > > > > I was primed to expect a lost IPI, perhaps due to yet another qemu bug,
> > > > > but all the migration threads are running within about 2 milliseconds.
> > > > > It is then almost two minutes(!) until the next trace message.
> > > > >
> > > > > Looks like time to (very carefully!) instrument multi_cpu_stop().
> > > > >
> > > > > Of course, if you have any ideas, please do not keep them a secret!
> > > >
> > > > Functionally, multi_cpu_stop() is working fine, according to the trace
> > > > below (search for a line beginning with TAB). But somehow CPU 2 took
> > > > almost three -minutes- to do one iteration of the loop. The prime suspect
> > > > in that loop is cpu_relax() due to the hypervisor having an opportunity
> > > > to do something at that point. The commentary below (again, search for
> > > > a line beginning with TAB) gives my analysis.
> > > >
> > > > Of course, if I am correct, it should be possible to catch cpu_relax()
> > > > in the act. That is the next step, give or take the Heisenbuggy nature
> > > > of this beast.
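
To make that concrete, the sort of check I have in mind looks roughly
like the following (a sketch only, not the actual diagnostic patch; the
timed_cpu_relax() name, the 100-millisecond threshold, and the message
format are made up for illustration, the idea being to call this in
place of the existing cpu_relax() in multi_cpu_stop()'s wait loop):

	#include <linux/sched/clock.h>	/* local_clock() */
	#include <linux/ktime.h>	/* NSEC_PER_MSEC */

	/*
	 * Time a single cpu_relax() and complain via trace_printk() if it
	 * took absurdly long, which would point at vCPU preemption.
	 */
	static void timed_cpu_relax(void)
	{
		u64 t0, t1;

		t0 = local_clock();	/* this CPU's sched clock, in ns */
		cpu_relax();
		t1 = local_clock();
		if (t1 - t0 > 100 * NSEC_PER_MSEC)	/* anything this long is absurd */
			trace_printk("cpu_relax() took %llu ms\n",
				     (unsigned long long)((t1 - t0) / NSEC_PER_MSEC));
	}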
> > > >
> > > > Another thing for me to try is to run longer with !NO_HZ_FULL, just in
> > > > case the earlier runs just got lucky.
> > > >
> > > > Thoughts?
> > >
> > > And it really can happen:
> > >
> > > [ 1881.467922] migratio-33 4...1 1879530317us : stop_machine_yield: cpu_relax() took 756140 ms
> > >
> > > The previous timestamp was 1123391100us, and 1879530317us - 1123391100us
> > > is about 756139ms, so that one cpu_relax() accounts for almost exactly
> > > the full delay.
> > >
> > > But another instance stalled for many minutes without a ten-second
> > > cpu_relax(). So it is not just cpu_relax() causing trouble. I could
> > > rationalize that as vCPU preemption being at fault...
> > >
> > > And my diagnostic patch is below, just in case I am doing something
> > > stupid with that.
> >
> > I did a 12-hour run with the same configuration, except for leaving out the
> > "nohz_full=1-7" kernel parameter, and saw no problems (aside from the RCU CPU
> > stall warnings due to the ftrace_dump() at the end of the run -- isn't
> > there some way to clear the ftrace buffer without actually printing it?).
>
> I think if tracing_reset_all_online_cpus() is exposed, then calling that with
> the appropriate locks held can reset the ftrace ring buffer and clear the
> trace. Steve, is there a better way?
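
And just to be sure I understand the suggestion, something like this?
(Purely hypothetical sketch: as far as I know, neither
tracing_reset_all_online_cpus() nor trace_types_lock is visible outside
of kernel/trace/ today, and I am assuming that trace_types_lock is the
"appropriate lock" you mention.)

	/*
	 * Hypothetical: discard the ftrace ring-buffer contents without
	 * printing them, assuming the function and lock were exposed.
	 */
	static void flush_trace_buffers(void)
	{
		mutex_lock(&trace_types_lock);
		tracing_reset_all_online_cpus();
		mutex_unlock(&trace_types_lock);
	}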
On the off-chance that it helps, here is my use case (with a rough
sketch of the flow after the list):
o I have a race condition that becomes extremely unlikely if
I leave tracing enabled at all times.
o I therefore enable tracing at the beginning of a CPU-hotplug
removal.
o At the end of that CPU-hotplug removal, I disable tracing.
o I can test whether the problem occurred without affecting its
probability. If it occurred, I want to dump only the portion
of the trace buffer collected since enabling tracing above. If the
problem did not occur, I want to flush (not dump) the trace buffer.
o I cannot reasonably just make the trace buffer small because
the number of events in a given occurrence of the problem
can vary widely.
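
Roughly speaking, the flow I am after looks like this (sketch only:
tracing_on(), tracing_off(), and ftrace_dump() are the pieces I already
have, while hotplug_remove_cpu(), problem_detected(), and
do_flush_trace() are placeholders -- the last one being whatever
flush-without-dump primitive we end up with):

	static void traced_cpu_removal(unsigned int cpu)
	{
		tracing_on();			/* start collecting at the start of the removal */
		hotplug_remove_cpu(cpu);	/* placeholder for the CPU-hotplug removal itself */
		tracing_off();			/* stop collecting at the end of the removal */

		if (problem_detected())		/* placeholder for the actual check */
			ftrace_dump(DUMP_ALL);	/* dump the trace buffer */
		else
			do_flush_trace();	/* placeholder: discard without printing */
	}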
Thanx, Paul