Message-ID: <51671A72.6070204@linux.vnet.ibm.com>
Date: Fri, 12 Apr 2013 01:47:54 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Russ Anderson <rja@....com>
CC: Paul Mackerras <paulus@...ba.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>, Robin Holt <holt@....com>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Shawn Guo <shawn.guo@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
the arch/x86 maintainers <x86@...nel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Tejun Heo <tj@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Michel Lespinasse <walken@...gle.com>,
"rusty@...tcorp.com.au" <rusty@...tcorp.com.au>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: Bulk CPU Hotplug (Was Re: [PATCH] Do not force shutdown/reboot
to boot cpu.)
On 04/12/2013 01:38 AM, Russ Anderson wrote:
> On Thu, Apr 11, 2013 at 08:15:27PM +0530, Srivatsa S. Bhat wrote:
>> On 04/11/2013 07:53 PM, Russ Anderson wrote:
>>> On Thu, Apr 11, 2013 at 06:15:18PM +0530, Srivatsa S. Bhat wrote:
>>>>
>>>> One more thing we have to note is that there are 4 notifiers for taking a
>>>> CPU offline:
>>>>
>>>> CPU_DOWN_PREPARE
>>>> CPU_DYING
>>>> CPU_DEAD
>>>> CPU_POST_DEAD
>>>>
>>>> The first can be run in parallel as mentioned above. The second is run in
>>>> parallel in the stop_machine() phase as shown in Russ' patch. But the third
>>>> and fourth sets of notifications all end up running only on CPU0, which will
>>>> again slow things down.
>>>
>>> In my testing the third and fourth sets were a small part of the overall
>>> time: less than 10%, with the cpu notifiers taking 90+% of the time.
>>
>> *All* of them are cpu notifiers! All of them invoke __cpu_notify() internally.
>> So how did you differentiate between them and find out that the third and
>> fourth sets take less time?
>
> I reran a test on a 1024 cpu system, using my test patch to only call
> __stop_machine() once. Added printks to show the kernel timestamp
> at various points.
>
> When calling disable_nonboot_cpus() and enable_nonboot_cpus() just after
> booting the system:
> The loop calling __cpu_notify(CPU_DOWN_PREPARE) took 376.6 seconds.
> The loop calling cpu_notify_nofail(CPU_DEAD) took 8.1 seconds.
>
> My guess is that notifiers do more work in the CPU_DOWN_PREPARE case.
>
> I also added a loop calling a new notifier (CPU_TEST) which none of the
> notifiers would recognize, to measure the time it took to spin through
> the call chain without the notifiers doing any work. It took
> 0.0067 seconds.
>
> On the actual reboot, as the system was shutting down:
> The loop calling __cpu_notify(CPU_DOWN_PREPARE) took 333.8 seconds.
> The loop calling cpu_notify_nofail(CPU_DEAD) took 2.7 seconds.
>
> I don't know how many notifiers are on the chain, or if there is
> one heavy hitter accounting for much of the time in the
> CPU_DOWN_PREPARE case.
>
>
> FWIW, the overall cpu stop times are somewhat longer than what I
> measured before. Not sure if the difference is due to changes in
> my test patch, other kernel changes pulled in, or some difference
> on the test system.
>
>
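
For reference, here is a minimal sketch (not taken from this thread) of the
3.x-era CPU hotplug notifier interface whose four down-path stages are
listed above.  The "my_driver" names are made up purely for illustration;
only the event values and register_cpu_notifier() are real kernel API.

#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/notifier.h>

static int my_driver_cpu_callback(struct notifier_block *nb,
                                  unsigned long action, void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_DOWN_PREPARE:  /* CPU still online; callback may block */
                pr_info("preparing to take CPU%u down\n", cpu);
                break;
        case CPU_DYING:         /* runs on the dying CPU inside stop_machine() */
                break;
        case CPU_DEAD:          /* CPU is gone; runs on a surviving CPU */
        case CPU_POST_DEAD:     /* runs after the hotplug lock is dropped */
                pr_info("CPU%u is offline\n", cpu);
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block my_driver_cpu_notifier = {
        .notifier_call = my_driver_cpu_callback,
};

static int __init my_driver_init(void)
{
        register_cpu_notifier(&my_driver_cpu_notifier);
        return 0;
}
module_init(my_driver_init);

Every registered block like this is invoked for each of the four events,
once per CPU being offlined, so on a 1024-cpu machine a single slow
CPU_DOWN_PREPARE handler gets multiplied 1024 times.
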
Thanks a lot for reporting the time taken at each stage; it's extremely
useful. So we can drop the idea of taking CPUs down in multiple rounds
of 512, 256 and so on. And, as you mentioned earlier, just running the
CPU_DOWN_PREPARE notifiers in parallel should give us all of the
performance improvement. Or perhaps we can instrument notifier_call_chain()
in kernel/notifier.c to find out whether there is a rogue notifier that
contributes most of the ~300 seconds.
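
To illustrate what that instrumentation might look like (a sketch only, not
a patch anyone has posted): the loop in notifier_call_chain() could route
each invocation through a timed helper along these lines.  The helper name
and the 100ms threshold are arbitrary choices made here.

#include <linux/ktime.h>
#include <linux/notifier.h>
#include <linux/printk.h>

/* Hypothetical helper: time one notifier callback and report slow ones. */
static int timed_notifier_call(struct notifier_block *nb,
                               unsigned long val, void *v)
{
        ktime_t start = ktime_get();
        int ret = nb->notifier_call(nb, val, v);
        s64 delta_us = ktime_us_delta(ktime_get(), start);

        if (delta_us > 100 * 1000)      /* arbitrary 100ms threshold */
                printk(KERN_INFO "notifier %pf took %lld us for event 0x%lx\n",
                       nb->notifier_call, delta_us, val);
        return ret;
}

Replacing the direct nb->notifier_call(nb, val, v) invocation in
notifier_call_chain() with a call to something like this would quickly show
whether one callback dominates the ~300 seconds or whether the cost is
spread across many notifiers.
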
Regards,
Srivatsa S. Bhat
--