Message-ID: <m2eitcrea4.fsf@pobox.com>
Date: Mon, 22 Jun 2009 01:08:51 -0500
From: Nathan Lynch <ntl@...ox.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
svaidy@...ux.vnet.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
Gautham R Shenoy <ego@...ibm.com>,
linux-kernel@...r.kernel.org, Balbir Singh <balbir@...ibm.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Shaohua Li <shaohua.li@...ux.com>
Subject: Re: [RFD PATCH 0/4] cpu: Bulk CPU Hotplug support.

Ingo Molnar <mingo@...e.hu> writes:
> * Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
>
>> On Wed, 2009-06-17 at 17:07 +0200, Ingo Molnar wrote:
>> > * Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
>> >
>> > > On Wed, Jun 17, 2009 at 09:32:57AM +0200, Peter Zijlstra wrote:
>> > > > On Tue, 2009-06-16 at 13:37 +0530, Vaidyanathan Srinivasan wrote:
>> > > > > * Andrew Morton <akpm@...ux-foundation.org> [2009-06-15 23:23:18]:
>> > > > >
>> > > > > > On Tue, 16 Jun 2009 11:08:39 +0530 Gautham R Shenoy <ego@...ibm.com> wrote:
>> > > > > >
>> > > > > > > Currently on a ppc64 box with 16 CPUs, the time taken for
>> > > > > > > an individual cpu-hotplug operation is as follows.
>> > > > > > >
>> > > > > > > # time echo 0 > /sys/devices/system/cpu/cpu2/online
>> > > > > > > real 0m0.025s
>> > > > > > > user 0m0.000s
>> > > > > > > sys 0m0.002s
>> > > > > > >
>> > > > > > > # time echo 1 > /sys/devices/system/cpu/cpu2/online
>> > > > > > > real 0m0.021s
>> > > > > > > user 0m0.000s
>> > > > > > > sys 0m0.000s
>> > > > > >
>> > > > > > Surprised. Do people really online and offline CPUs frequently enough
>> > > > > > for this to be a problem?
>> > > > >
>> > > > > Certainly not for hardware faults or hardware replacement, but
>> > > > > the cpu-hotplug interface is useful for changing system configuration to
>> > > > > meet different objectives like
>> > > > >
>> > > > > * Reduce system capacity to reduce average power and reduce heat
>> > > > >
>> > > > > * The increasing number of cores and threads in a CPU package
>> > > > > means multiple cpu offline/online operations are needed for any
>> > > > > perceivable effect
>> > > > >
>> > > > > * Dynamically change CPU configurations in virtualized environments
>> > > >
>> > > > I tend to agree with Andrew: if any of those things are done
>> > > > frequently enough that hotplug performance matters, you're doing
>> > > > something mighty odd.
>> > >
>> > > Boot speedup?
>> >
>> > Also, if it brings more attention (and more stability and more
>> > bugfixes) to CPU hotplug that's only good.
>>
>> Sure, but do we need the extra complexity?
>>
>> I mean, sure, bootup speed might be nice, but none of the scenarios
>> given should require cpu hotplug actions frequent enough for the
>> performance to matter.
>
> Well, the fact that the patches exist shows that there are people
> who care about the speedup here. The speedup itself is non-trivial.

If I correctly understand the behavior of the patch set as posted, there
is no speedup beyond eliminating the overhead of multiple writes to
/sys/devices/system/cpu/cpu*/online. To obtain non-trivial speedups via
bulk hotplug, one or both of the following items from the TODO list need
to be completed:
- Enhance the subsystem notifiers to work on a cpumask_var_t instead of
  a cpu id (a rough sketch of what this might mean for a subsystem
  follows below).
- Optimize the subsystem notifiers to reduce the time consumed while
  handling CPU_[DOWN_PREPARE/DEAD/UP_PREPARE/ONLINE] events for the
  cpumask_var_t.
Right?
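
To make the first item concrete, here is a rough sketch of what a
mask-based notifier might mean for a subsystem. This is entirely
hypothetical: cpu_mask_callback, setup_percpu_state and
teardown_percpu_state are names I made up, not anything from the
posted series. Today every callback handles exactly one cpu per event:

/* assumes linux/cpu.h, linux/notifier.h, linux/cpumask.h */
static int cpu_callback(struct notifier_block *nfb,
                        unsigned long action, void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action) {
        case CPU_ONLINE:
                setup_percpu_state(cpu);   /* called once per cpu, per event */
                break;
        case CPU_DEAD:
                teardown_percpu_state(cpu);
                break;
        }
        return NOTIFY_OK;
}

whereas a cpumask-based callback could take its subsystem locks once
per bulk operation and then sweep the whole mask:

static int cpu_mask_callback(struct notifier_block *nfb,
                             unsigned long action, void *hcpus)
{
        const struct cpumask *mask = hcpus;
        unsigned int cpu;

        switch (action) {
        case CPU_ONLINE:
                /* lock once, then set up state for every cpu in the mask */
                for_each_cpu(cpu, mask)
                        setup_percpu_state(cpu);
                break;
        case CPU_DEAD:
                for_each_cpu(cpu, mask)
                        teardown_percpu_state(cpu);
                break;
        }
        return NOTIFY_OK;
}

Only with something along those lines (plus the optimization in the
second item) would batching several cpus per operation buy more than
the saved sysfs writes.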
(The powerpc-specific patch at the beginning of the series improves
hot-online time for a single cpu in some circumstances and is basically
unrelated to the aim of the patch set -- it should go upstream through
the powerpc tree independently.)