Message-ID: <20190809165120.GA5668@linux.ibm.com>
Date:   Fri, 9 Aug 2019 09:51:20 -0700
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH RFC tip/core/rcu 14/14] rcu/nohz: Make multi_cpu_stop()
 enable tick on all online CPUs

On Thu, Aug 08, 2019 at 02:30:12PM -0700, Paul E. McKenney wrote:
> On Thu, Aug 08, 2019 at 01:35:41PM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 07, 2019 at 02:41:31PM -0700, Paul E. McKenney wrote:
> > > On Tue, Aug 06, 2019 at 11:08:24AM -0700, Paul E. McKenney wrote:
> > > > On Mon, Aug 05, 2019 at 10:48:00AM -0700, Paul E. McKenney wrote:
> > > > > On Mon, Aug 05, 2019 at 05:50:24PM +0200, Peter Zijlstra wrote:
> > > > > > On Mon, Aug 05, 2019 at 07:54:48AM -0700, Paul E. McKenney wrote:
> > > > > > 
> > > > > > > > Right; so clearly we're not understanding what's happening. That seems
> > > > > > > > like a requirement for actually doing a patch.
> > > > > > > 
> > > > > > > Almost but not quite.  It is a requirement for a patch *that* *is*
> > > > > > > *supposed* *to* *be* *a* *fix*.  If you are trying to prohibit me from
> > > > > > > writing experimental patches, please feel free to take a long walk on
> > > > > > > a short pier.
> > > > > > > 
> > > > > > > Understood???
> > > > > > 
> > > > > > Ah, my bad, I thought you were proposing this as an actual patch.
> > > > > > I now see that I'd overlooked the RFC part.
> > > > > 
> > > > > No problem!
> > > > > 
> > > > > And of course adding tracing decreases the frequency and duration of
> > > > > the multi_cpu_stop().  Re-running with shorter-duration triggering.  ;-)
> > > > 
> > > > And I did eventually get a good trace.  If I am interpreting this trace
> > > > correctly, the torture_-135 task didn't get around to attempting to wake
> > > > up all of the CPUs.  I will try again, but this time with the sched_switch
> > > > trace event enabled.
> > > > 
> > > > As a side note, enabling ftrace from the command line seems to interact
> > > > badly with turning tracing off and on in the kernel, so I eventually
> > > > resorted to trace_printk() in the functions of interest.  The trace
> > > > output is below, followed by the current diagnostic patch.  Please note
> > > > that I am -not- using the desperation hammer-the-scheduler patches.
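> > > > 
> > > > (For concreteness, "trace_printk() in the functions of interest"
> > > > means sprinkling breadcrumbs of roughly the following form into
> > > > multi_cpu_stop() and friends.  This is illustrative only, not the
> > > > actual diff, and the helper name is made up:
> > > > 
> > > > 	/* Hypothetical helper: drop a timestamped breadcrumb
> > > > 	 * directly into the ftrace ring buffer. */
> > > > 	static void stop_trace_mark(const char *what)
> > > > 	{
> > > > 		trace_printk("%s: CPU %d at %llu ns\n",
> > > > 			     what, smp_processor_id(), ktime_get_ns());
> > > > 	}
> > > > 
> > > > called at entry to and exit from the suspect code paths.)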
> > > > 
> > > > More as I learn more!
> > > 
> > > And of course I forgot to dump out the online CPUs, so I really had no
> > > idea whether or not all the CPUs were accounted for.  I added tracing
> > > to dump out the online CPUs at the beginning of __stop_cpus() and then
> > > reworked it a few times to get the problem to happen in reasonable time.
> > > Please see below for the resulting annotated trace.
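> > > 
> > > (The "dump out the online CPUs" part is a one-liner along these
> > > lines, modulo exact wording; "%*pbl" prints a cpumask as a CPU
> > > list, for example "0-7":
> > > 
> > > 	/* At the top of __stop_cpus(): record which CPUs are
> > > 	 * expected to check in. */
> > > 	trace_printk("online CPUs: %*pbl\n",
> > > 		     cpumask_pr_args(cpu_online_mask));
> > > 
> > > so the trace shows exactly which CPUs should be accounted for.)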
> > > 
> > > I was primed to expect a lost IPI, perhaps due to yet another qemu bug,
> > > but all the migration threads are running within about 2 milliseconds.
> > > It is then almost two minutes(!) until the next trace message.
> > > 
> > > Looks like time to (very carefully!) instrument multi_cpu_stop().
> > > 
> > > Of course, if you have any ideas, please do not keep them a secret!
> > 
> > Functionally, multi_cpu_stop() is working fine, according to the trace
> > below (search for a line beginning with TAB).  But somehow CPU 2 took
> > almost three -minutes- to do one iteration of the loop.  The prime suspect
> > in that loop is cpu_relax() due to the hypervisor having an opportunity
> > to do something at that point.  The commentary below (again, search for
> > a line beginning with TAB) gives my analysis.
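> > 
> > For reference, that loop has roughly the following shape (heavily
> > simplified from kernel/stop_machine.c, with declarations and the
> > full state switch omitted):
> > 
> > 	/* Each CPU's migration thread spins here until every CPU has
> > 	 * stepped through the state machine.  The only "idle" point
> > 	 * in each pass is stop_machine_yield(), which boils down to
> > 	 * cpu_relax() on most architectures, and is thus the natural
> > 	 * spot for the hypervisor to preempt the vCPU. */
> > 	do {
> > 		stop_machine_yield(cpumask);
> > 		newstate = READ_ONCE(msdata->state);
> > 		if (newstate != curstate) {
> > 			curstate = newstate;
> > 			/* ... disable irqs, run the callback, ... */
> > 			ack_state(msdata);
> > 		}
> > 	} while (curstate != MULTI_STOP_EXIT);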
> > 
> > Of course, if I am correct, it should be possible to catch cpu_relax()
> > in the act.  That is the next step, give or take the Heisenbuggy nature
> > of this beast.
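> > 
> > A sketch of what catching it might look like: time the yield and
> > emit a trace record whenever it runs long (hypothetical threshold
> > and wording, not necessarily the eventual diagnostic patch):
> > 
> > 	/* Instrumented variant of the generic stop_machine_yield():
> > 	 * complain via trace_printk() whenever one cpu_relax() call
> > 	 * consumes more than 100 ms of wall-clock time. */
> > 	void stop_machine_yield(const struct cpumask *cpumask)
> > 	{
> > 		u64 t0 = ktime_get_ns(), delta;
> > 
> > 		cpu_relax();
> > 		delta = ktime_get_ns() - t0;
> > 		if (delta > 100 * NSEC_PER_MSEC)
> > 			trace_printk("cpu_relax() took %llu ms\n",
> > 				     delta / NSEC_PER_MSEC);
> > 	}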
> > 
> > Another thing for me to try is to run longer with !NO_HZ_FULL, just in
> > case the earlier runs just got lucky.
> > 
> > Thoughts?
> 
> And it really can happen:
> 
> [ 1881.467922] migratio-33      4...1 1879530317us : stop_machine_yield: cpu_relax() took 756140 ms
> 
> The previous timestamp was 1123391100us, so the cpu_relax() is almost
> exactly the full delay.
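> 
> (That is, 1879530317 us - 1123391100 us = 756139217 us, or about
> 756139 ms, within a millisecond of the reported 756140 ms.)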
> 
> But another instance stalled for many minutes without a ten-second
> cpu_relax().  So it is not just cpu_relax() causing trouble.  I could
> rationalize that as vCPU preemption being at fault...
> 
> And my diagnostic patch is below, just in case I am doing something
> stupid with that.

I did a 12-hour run with the same configuration, except that I left out
the "nohz_full=1-7" kernel parameter, and saw no problems (aside from the
RCU CPU stall warnings due to the ftrace_dump() at the end of the run --
isn't there some way to clear the ftrace buffer without actually printing it?).

My next step is to do an over-the-weekend run with the same configuration,
then a similar run with a more recent kernel and qemu but with
"nohz_full=1-7" restored.  If both of those pass, I will consider this to
be a KVM/qemu bug that has since been fixed.

							Thanx, Paul
