Message-ID: <20191202233944.GY2889@paulmck-ThinkPad-P72>
Date: Mon, 2 Dec 2019 15:39:44 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Tejun Heo <tj@...nel.org>
Cc: jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: Workqueues splat due to ending up on wrong CPU
On Mon, Dec 02, 2019 at 12:13:38PM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> (cc'ing scheduler folks - the workqueue rescuer is very occasionally
> triggering a warning which says that it isn't on the cpu it should be
> on under rcu cpu hotplug torture test. It's checking that
> smp_processor_id() is the expected one after a successful
> set_cpus_allowed_ptr() call.)
>
> On Sun, Dec 01, 2019 at 05:55:48PM -0800, Paul E. McKenney wrote:
> > > And hyperthreading seems to have done the trick! One splat thus far,
> > > shown below. The run should complete this evening, Pacific Time.
> >
> > That was the only one for that run, but another 24*56-hour run got three
> > more. All of them expected to be on CPU 0 (which never goes offline, so
> > why?), and the "XXX" diagnostic never did print.
>
> Heh, I didn't expect that, so maybe set_cpus_allowed_ptr() is
> returning 0 while not migrating the rescuer task to the target cpu for
> some reason?
>
> The rescuer is always the one calling to migrate itself, so it must be
> running at the time. set_cpus_allowed_ptr() migrates live tasks by
> calling stop_one_cpu(), which schedules a migration function that runs
> from a highpri task on the target cpu. Please take a look at the
> following, from kernel/stop_machine.c:
>
> static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
> {
> 	...
> 	enabled = stopper->enabled;
> 	if (enabled)
> 		__cpu_stop_queue_work(stopper, work, &wakeq);
> 	else if (work->done)
> 		cpu_stop_signal_done(work->done);
> 	...
> }
>
> So, if stopper->enabled is clear, it'll signal completion without
> running the work. stopper->enabled is cleared during cpu hot-unplug
> and restored from bringup_cpu() (in kernel/cpu.c) while the cpu is
> being brought back up:
>
> static int bringup_wait_for_ap(unsigned int cpu)
> {
> 	...
> 	stop_machine_unpark(cpu);
> 	...
> }
>
> static int bringup_cpu(unsigned int cpu)
> {
> 	...
> 	ret = __cpu_up(cpu, idle);
> 	...
> 	return bringup_wait_for_ap(cpu);
> }
>
> __cpu_up() is what marks the cpu online, and once the cpu is online,
> kthreads are free to migrate into it. So it looks like there's a
> brief window where the cpu is marked online but its stopper thread is
> still disabled, meaning that a kthread may schedule into the cpu but
> not out of it. That would explain the symptom you were seeing.
>
> This makes the task's cpumask and the cpu it is actually on disagree,
> and retries become noops. I can work around it by excluding rescuer
> attachment against hotplug, but this looks like a genuine cpu hotplug
> bug. The suspected interleaving is sketched below.
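>
> Something like this, if I'm reading it right (my annotation of the
> above, not verified):
>
>   cpu hotplug (cpu X)              rescuer
>   -------------------              -------
>   __cpu_up(X, idle)
>     X marked online
>                                    schedules into X (allowed, X is
>                                    online), later tries to move away:
>                                      cpu_stop_queue_work() on X
>                                        stopper->enabled == false
>                                        -> done signaled, work not run
>   bringup_wait_for_ap(X)
>     stop_machine_unpark(X)
>       stopper->enabled = true      <- too late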
>
> It could be that I'm misreading the code. What do you guys think?
I think that I do not understand the code, but I never let that stop
me from asking stupid questions! ;-)
Suppose that a given worker is bound to a particular CPU, but has no
work pending, and is therefore sleeping in the schedule() call near the
end of worker_thread(). During this time, its CPU goes offline and then
comes back online. Doesn't this break that task's affinity to that CPU?
Then the call to workqueue_online_cpu() is supposed to rebind all the
tasks that might have been affected, correct?
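(For reference, my understanding is that the rebind boils down to
something like the following, going from memory of kernel/workqueue.c,
so please correct me if the details are off:)

	/* workqueue_online_cpu() -> rebind_workers(pool), roughly: */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);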
I could imagine putting a trace_printk() or two in workqueue_online_cpu()
and adding the task_struct pointer (or PID) to the WARN_ONCE(), along the
lines of the untested sketch below, though I am worried that this might
decrease the race probability.
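Illustrative only, and the exact placement in workqueue_online_cpu()'s
rebind path would need checking:

	/* When rebinding a worker to a freshly onlined pool: */
	trace_printk("rebind %s/%d to CPU%d\n",
		     worker->task->comm, worker->task->pid, pool->cpu);

	/* And have the existing check identify the task: */
	WARN_ONCE(raw_smp_processor_id() != pool->cpu,
		  "%s/%d on CPU%d, expected CPU%d\n",
		  current->comm, current->pid,
		  raw_smp_processor_id(), pool->cpu);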
Is there a better way to proceed?
							Thanx, Paul