Date:   Mon, 9 Dec 2019 10:59:08 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Tejun Heo <tj@...nel.org>, jiangshanlai@...il.com,
        linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: Workqueues splat due to ending up on wrong CPU

On Fri, Dec 06, 2019 at 02:00:20PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 06, 2019 at 10:52:08AM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 05, 2019 at 06:48:05AM -0800, Paul E. McKenney wrote:
> > > On Thu, Dec 05, 2019 at 11:32:13AM +0100, Peter Zijlstra wrote:
> > > > On Thu, Dec 05, 2019 at 11:29:28AM +0100, Peter Zijlstra wrote:
> > > > > On Wed, Dec 04, 2019 at 12:11:50PM -0800, Paul E. McKenney wrote:
> > > > > 
> > > > > > And the good news is that I didn't see the workqueue splat, though my
> > > > > > best guess is that I had about a 13% chance of not seeing it due to
> > > > > > random chance (and I am currently trying an idea that I hope will make
> > > > > > it more probable).  But I did get a couple of new complaints about RCU
> > > > > > being used illegally from an offline CPU.  Splats below.
> > > > > 
> > > > > Shiny!
> > > 
> > > And my attempt to speed things up did succeed, but the success was limited
> > > to finding more places where rcutorture chokes on CPUs being slow to boot.
> > > Fixing those and trying again...
> > 
> > And I finally did manage to get a clean run.  There are probably a few
> > more things that a large slow-booting hyperthreaded system can do to
> > confuse rcutorture, but it is at least down to a dull roar.
> > 
> > > > > > Your patch did rearrange the CPU-online sequence, so let's see if I
> > > > > > can piece things together...
> > > > > > 
> > > > > > RCU considers a CPU to be online at rcu_cpu_starting() time.  This is
> > > > > > called from notify_cpu_starting(), which is called from the arch-specific
> > > > > > CPU-bringup code.  Any RCU readers before rcu_cpu_starting() will trigger
> > > > > > the warning I am seeing.
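
(For anyone following along: the warning is lockdep-RCU noticing a
read-side critical section on a CPU that RCU does not yet consider
online.  Illustrative fragment only, not actual kernel code; "gp" and
"p" are made-up names:)

	/* On the incoming CPU, before notify_cpu_starting() has run: */
	rcu_read_lock();
	p = rcu_dereference(gp);	/* lockdep splats: "RCU used
					 * illegally from offline CPU!" */
	rcu_read_unlock();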
> > > > > 
> > > > > Right.
> > > > > 
> > > > > > The original location of the stop_machine_unpark() was in
> > > > > > bringup_wait_for_ap(), which is called from bringup_cpu(), which is in
> > > > > > the CPUHP_BRINGUP_CPU entry of cpuhp_hp_states[].  Which, if I am not
> > > > > > too confused, is invoked by some CPU other than the to-be-incoming CPU.
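
(For reference, the table entry in kernel/cpu.c looks roughly like
this; abridged, details may differ by kernel version:)

	[CPUHP_BRINGUP_CPU] = {
		.name			= "cpu:bringup",
		.startup.single		= bringup_cpu,
		.teardown.single	= NULL,
		.cant_stop		= true,
	},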
> > > > > 
> > > > > Correct.
> > > > > 
> > > > > > The new location of the stop_machine_unpark() is in cpuhp_online_idle(),
> > > > > > which is called from cpu_startup_entry(), which is invoked from
> > > > > > the arch-specific bringup code that runs on the incoming CPU.
> > > > > 
> > > > > The new place is the final piece of bringup, it is right before where
> > > > > the freshly woken CPU will drop into the idle loop and start scheduling
> > > > > (for the first time).
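
(So, if I read the patch correctly, the tail of bringup on the incoming
CPU ends up looking roughly like the following; a sketch of
cpuhp_online_idle() in kernel/cpu.c, not the literal patch:)

	void cpuhp_online_idle(enum cpuhp_state state)
	{
		struct cpuhp_cpu_state *st = this_cpu_ptr(&cpuhp_state);

		/* Happens for the boot cpu */
		if (state != CPUHP_AP_ONLINE_IDLE)
			return;

		st->state = CPUHP_AP_ONLINE_IDLE;
		/* Unpark this CPU's stopper thread here, on the incoming
		 * CPU itself, instead of from bringup_wait_for_ap() on
		 * the controlling CPU. */
		stop_machine_unpark(smp_processor_id());
		complete_ap_thread(st, true);
	}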
> > > > > 
> > > > > > Which
> > > > > > is the same code that invokes notify_cpu_starting(), so we need
> > > > > > notify_cpu_starting() to be invoked before cpu_startup_entry().
> > > > > 
> > > > > Right, that is right before we run what used to be the CPU_STARTING
> > > > > notifiers. This is in fact (on x86) before the CPU is marked
> > > > > cpu_online(). It has to be before cpu_startup_entry(), because this
> > > > > is run with IRQs disabled, while cpu_startup_entry() demands IRQs are
> > > > > enabled.
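
(Concretely, the incoming CPU on x86 runs roughly this sequence;
abridged from arch/x86/kernel/smpboot.c:)

	static void notrace start_secondary(void *unused)
	{
		...
		smp_callin();	/* -> notify_cpu_starting()
				 * -> rcu_cpu_starting(): RCU now
				 *    considers this CPU online */
		...
		set_cpu_online(smp_processor_id(), true);
		local_irq_enable();
		...
		cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
	}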
> > > > > 
> > > > > > The order is not immediately obvious on IA64.  But it looks like
> > > > > > everything else does it in the required order, so I am a bit confused
> > > > > > about this.
> > > > > 
> > > > > That makes two of us, afaict we have RCU up and running when we get to
> > > > > the idle loop.
> > > > 
> > > > Or did we need rcutree_online_cpu() to have run? Because that is run
> > > > much later than this...
> > > 
> > > No, rcu_cpu_starting() does the trick.  So I remain confused.
> > > 
> > > My thought is to add some printk()s or tracing to rcu_cpu_starting()
> > > and its counterpart, rcu_report_dead().  But is there a better way?
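
(Something as blunt as the following would do for a first pass; debug
only, of course, not for merging:)

	/* At the top of rcu_cpu_starting(): */
	trace_printk("rcu_cpu_starting: cpu=%u\n", cpu);
	/* And the same at the top of rcu_report_dead(): */
	trace_printk("rcu_report_dead: cpu=%u\n", cpu);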
> > 
> > And the answer is...
> > 
> > This splat happens even without your fix!
> > 
> > Which goes a long way to explaining why neither of us could figure out
> > how your fix could have caused it.  It apparently was the increased
> > stress required to reproduce quickly rather than your fix that made it
> > happen more frequently.  Though there are few enough occurrences that
> > it might just be random chance.
> > 
> > Thoughts?
> 
> I now have 12 of these, and my best guess is that this is happening
> from kvm_guest_cpu_init() when it prints "KVM setup async PF for cpu",
> given that this message is always either the line immediately
> following the splat or the one after that.  So let's see if I can
> connect the dots between kvm_guest_cpu_init() and start_secondary().
> The "? slow_virt_to_phys()" makes sense, as it is invoked by
> kvm_guest_cpu_init() just before the suspect printk().
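
(For context, roughly what that function did at the time; abridged
sketch from arch/x86/kernel/kvm.c, details may differ by version:)

	static void kvm_guest_cpu_init(void)
	{
		if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
			u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));
			...
			wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
			...
			printk(KERN_INFO "KVM setup async PF for cpu %d\n",
			       smp_processor_id());
		}
		...
	}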
> 
> kvm_guest_cpu_init() is invoked by kvm_smp_prepare_boot_cpu(),
> kvm_cpu_online(), and kvm_guest_init().  Since the boot CPU came
> up long ago and since rcutorture CPU hotplug should be on the job
> at the time of all of these splats, I am guessing kvm_cpu_online().
> But kvm_cpu_online() is invoked by kvm_guest_init(), so all non-boot-CPU
> roads lead to kvm_guest_init() anyway.
> 
> But kvm_guest_init() is a postcore_initcall() function.
> It is also placed in x86_hyper_kvm.init.guest_late_init().
> The postcore_initcall() looks unconditional, but does not appear in
> dmesg.  Besides which, at the time of the splat, boot is working on
> late_initcall()s rather than postcore_initcall()s.  So let's look at
> x86_hyper_kvm.init.guest_late_init(), which is invoked in setup_arch(),
> which is in turn invoked way early in boot, before rcu_init().
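
(The registration is roughly this, abridged from arch/x86/kernel/kvm.c;
setup_arch() later invokes x86_init.hyper.guest_late_init():)

	const __initconst struct hypervisor_x86 x86_hyper_kvm = {
		.name			= "KVM",
		.detect			= kvm_detect,
		.init.guest_late_init	= kvm_guest_init,
		...
	};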
> 
> So neither seems to make much sense here.
> 
> On the other hand, rcutorture's exercising of CPU hotplug before init
> has been spawned might not make the most sense, either.  So I will queue
> a patch that makes rcutorture hold off on the hotplug until boot is a
> bit further along.
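
(rcutorture already has a knob of roughly this shape in
kernel/rcu/rcutorture.c, so the new patch just needs to gate hotplug
on boot progressing a bit further:)

	torture_param(int, onoff_holdoff, 0,
		      "Time after boot before CPU hotplugs (s)");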
> 
> And then hammer this a bit over the weekend, this time with Peter's
> alleged fix.  ;-)

And it survived!  ;-)

Peter, could I please have your Signed-off-by?  Or take my Tested-by if
you would prefer to send it up some other way.

							Thanx, Paul
