Message-ID: <54EEAFCF.4080208@linux.vnet.ibm.com>
Date: Thu, 26 Feb 2015 11:01:59 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Nicolas Pitre <nicolas.pitre@...aro.org>,
Peter Zijlstra <peterz@...radead.org>
CC: linux-kernel@...r.kernel.org, mingo@...nel.org, rjw@...ysocki.net,
tglx@...utronix.de
Subject: Re: [PATCH 32/35] clockevents: Fix cpu down race for hrtimer based
broadcasting
On 02/23/2015 11:03 PM, Nicolas Pitre wrote:
> On Mon, 23 Feb 2015, Nicolas Pitre wrote:
>
>> On Mon, 23 Feb 2015, Peter Zijlstra wrote:
>>
>>> The reported function that fails: bL_switcher_restore_cpus() is called
>>> in the error paths of the former and the main path in the latter to make
>>> the 'stolen' cpus re-appear.
>>>
>>> The patch in question somehow makes that go boom.
>>>
>>>
>>> Now what all do you need to do to make it go boom? Just enable/disable
>>> the switcher once and it'll explode? Or does it need to do actual
>>> switches while it is enabled?
>>
>> It gets automatically enabled during boot. Then several switches are
>> performed while user space is brought up. If I manually disable it
>> via /sys then it goes boom.
>
> OK. Forget the bL switcher. I configured it out of my kernel and then
> managed to get the same crash by simply hotplugging out one CPU and
> plugging it back in.
>
> $ echo 0 > /sys/devices/system/cpu/cpu2/online
> [CPU2 gone]
> $ echo 1 > /sys/devices/system/cpu/cpu2/online
> [Boom!]
>
>
I saw an issue with this patch as well. I tried to do an SMT mode switch
on a POWER machine, i.e. varying the number of SMT threads on an SMT-8
system, and the system hangs. Worse, there are no soft lockup messages,
warnings or BUG_ONs reported. I am digging into this issue.
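
For reference, the SMT mode switch is really just bulk CPU hotplug
through sysfs; below is a minimal sketch of what the test boils down to
(the thread numbering is only illustrative of my setup, and as far as I
know ppc64_cpu --smt drives the same sysfs files underneath):

/*
 * Illustrative only: take the sibling threads of an SMT-8 core offline
 * and bring them back, via the cpu hotplug sysfs interface. The cpu
 * numbers are assumptions about the topology, not taken from the patch.
 */
#include <stdio.h>

static int set_online(int cpu, int online)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%d\n", online);
        fclose(f);
        return 0;
}

int main(void)
{
        int cpu;

        /* cpus 1..7 assumed to be the sibling threads of cpu0 */
        for (cpu = 1; cpu < 8; cpu++)
                set_online(cpu, 0);     /* down to ST mode */
        for (cpu = 1; cpu < 8; cpu++)
                set_online(cpu, 1);     /* back to SMT 8 */
        return 0;
}
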
A couple of points though. Looking at the patch, I see that we are
shutting down the tick device of the hotplugged-out CPU *much before*
migrating the timers and hrtimers off it. Migration of timers is done
in the CPU_DEAD phase, while we shut down tick devices in the CPU_DYING
phase. There is quite a gap between the two. Earlier we would do both
in a single notification.
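
To make the gap concrete, here is a stripped-down userspace model of
the ordering as I read the patch. The phase names mirror the hotplug
notifier constants, but the handlers are only placeholders, not the
real kernel code:

/*
 * Simplified model of the hotplug sequence being discussed; purely
 * illustrative, none of this is the actual kernel implementation.
 */
#include <stdio.h>

enum hotplug_phase { CPU_DYING, CPU_DEAD };

static void cpu_notify(enum hotplug_phase phase, int cpu)
{
        switch (phase) {
        case CPU_DYING:
                /* runs on the dying cpu; with the patch applied the
                 * tick device is already shut down here */
                printf("cpu%d: CPU_DYING -> tick device shutdown\n", cpu);
                break;
        case CPU_DEAD:
                /* runs much later, on another cpu, after __cpu_die() */
                printf("cpu%d: CPU_DEAD  -> migrate timers/hrtimers\n", cpu);
                break;
        }
}

int main(void)
{
        int cpu = 2;

        cpu_notify(CPU_DYING, cpu);
        printf("cpu%d: __cpu_die() waits for the cpu to really be gone\n", cpu);
        cpu_notify(CPU_DEAD, cpu);      /* the gap described above */
        return 0;
}
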
Another point is that the tick devices are shut down before the
hotplugged-out CPU actually dies in __cpu_die(). At first glance,
neither of these two points should create any issues, but since we are
noticing problems with this patch, I thought it best to put them forth.

But why are tick devices being shut down that early? Is there any
specific advantage to this? Taking over / handing over tick duties
should be done before __cpu_die(), but shutting down the tick device
should be done after that phase. This seems more natural, doesn't it?
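
Sketched as a sequence (again just a model of the ordering I have in
mind, not actual kernel code):

/* Ordering suggested above, illustrative only. */
#include <stdio.h>

int main(void)
{
        int cpu = 2;

        printf("cpu%d: CPU_DYING   -> hand over do_timer/broadcast duty\n", cpu);
        printf("cpu%d: __cpu_die() -> outgoing cpu is really gone\n", cpu);
        printf("cpu%d: CPU_DEAD    -> shut down its tick device, migrate timers\n", cpu);
        return 0;
}
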
Regards
Preeti U Murthy
> Nicolas
>