Message-ID: <52a34ed365cd560457e9abf5877c5b37@codeaurora.org>
Date: Wed, 01 Aug 2018 01:07:03 -0700
From: Sodagudi Prasad <psodagud@...eaurora.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
isaacm@...eaurora.org, matt@...eblueprint.co.uk, mingo@...nel.org,
linux-kernel@...r.kernel.org, gregkh@...uxfoundation.org,
pkondeti@...eaurora.org, stable@...r.kernel.org
Subject: Re: [PATCH] stop_machine: Disable preemption after queueing stopper
threads
On 2018-07-30 14:07, Peter Zijlstra wrote:
> On Mon, Jul 30, 2018 at 10:12:43AM -0700, Sodagudi Prasad wrote:
>> How about including the change below as well? Currently, there is no
>> way to identify whether thread migrations have completed or not. When
>> we observed this issue, the symptom was a workqueue lockup. It is
>> better to have some timeout here and induce the BUG_ON().
>
> You'd trigger the soft-lockup or hung-task detector I think. And if
> not, we ought to look at making it trigger at least one of those.
>
>> There is no way to identify whether the migration threads are stuck or not.
>
> Should be pretty obvious from the splat generated by the above, no?
Hi Peter and Thomas,

Thanks for your support.

I have another question about this flow: the retry mechanism in
cpu_stop_queue_two_works(), which relies on the global variable
stop_cpus_in_progress. This variable is used in several paths, such as
task migration, setting task affinity, and CPU hotplug.

In the CPU hotplug path, for example, stop_cpus_in_progress is set to
true directly, without any check:

takedown_cpu()
--stop_machine_cpuslocked()
---stop_cpus()
----__stop_cpus()
-----queue_stop_cpus_work()

queue_stop_cpus_work() sets stop_cpus_in_progress to true directly, but
only the task migration path uses stop_cpus_in_progress for a retry.

I am thinking that the stop_cpus_in_progress variable could lead to race
conditions when CPU hotplug and task migration happen simultaneously.
Please correct me if my understanding is wrong.
-Thanks, Prasad
>
>> --- a/kernel/stop_machine.c
>> +++ b/kernel/stop_machine.c
>> @@ -290,6 +290,7 @@ int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *
>>  	struct cpu_stop_done done;
>>  	struct cpu_stop_work work1, work2;
>>  	struct multi_stop_data msdata;
>> +	int ret;
>>
>>  	msdata = (struct multi_stop_data){
>>  		.fn = fn,
>> @@ -312,7 +313,10 @@ int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *
>>  	if (cpu_stop_queue_two_works(cpu1, &work1, cpu2, &work2))
>>  		return -ENOENT;
>>
>> -	wait_for_completion(&done.completion);
>> +	ret = wait_for_completion_timeout(&done.completion, msecs_to_jiffies(1000));
>> +	if (!ret)
>> +		BUG_ON(1);
>> +
>
> That's a random timeout, which if you spuriously trigger it, will take
> down your machine. That seems like a cure worse than the disease.
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project