Message-ID: <20120405173918.GC8194@linux.vnet.ibm.com>
Date: Thu, 5 Apr 2012 10:39:18 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arjan van de Ven <arjan@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
"rusty@...tcorp.com.au" <rusty@...tcorp.com.au>,
"Rafael J. Wysocki" <rjw@...k.pl>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Milton Miller <miltonm@....com>,
"mingo@...e.hu" <mingo@...e.hu>, Tejun Heo <tj@...nel.org>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Linux PM mailing list <linux-pm@...r.kernel.org>
Subject: Re: CPU Hotplug rework

On Mon, Mar 19, 2012 at 08:18:42PM +0530, Srivatsa S. Bhat wrote:
> On 03/19/2012 08:14 PM, Srivatsa S. Bhat wrote:
>
> > Hi,
> >
> > There had been some discussion on CPU Hotplug redesign/rework
> > some time ago, but it was buried under a thread with a different
> > subject.
> > (http://thread.gmane.org/gmane.linux.kernel/1246208/focus=1246404)
> >
> > So I am opening a new thread with an appropriate subject to discuss
> > what needs to be done and how to go about it, as part of the rework.
> >
> > Peter Zijlstra and Paul McKenney had come up with TODO lists for the
> > rework, and here are their extracts from the previous discussion:

Finally getting around to looking at this in more detail...

> Additional things that I would like to add to the list:
>
> 1. Fix issues with CPU Hotplug callback registration. Currently there
> is no totally-race-free way to register callbacks and do setup
> for already online cpus.
>
> I had posted an incomplete patchset some time ago regarding this,
> which gives an idea of the direction I had in mind.
> http://thread.gmane.org/gmane.linux.kernel/1258880/focus=15826
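
If the race you mean is the one in the usual open-coded pattern, it
looks something like the following (all names made up; just a sketch):

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/notifier.h>

static void foo_online_cpu(unsigned int cpu)
{
	/* per-CPU setup for the subsystem goes here */
}

static int __cpuinit foo_cpu_callback(struct notifier_block *nfb,
				      unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	switch (action) {
	case CPU_ONLINE:
	case CPU_ONLINE_FROZEN:
		foo_online_cpu(cpu);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block __cpuinitdata foo_cpu_notifier = {
	.notifier_call = foo_cpu_callback,
};

static int __init foo_init(void)
{
	unsigned int cpu;

	register_hotcpu_notifier(&foo_cpu_notifier);

	/*
	 * Window: a CPU coming online right here is set up twice
	 * (once by the notifier, once by the loop below).  Do the
	 * loop first instead, and a CPU coming online in the window
	 * before registration is missed entirely.
	 */
	for_each_online_cpu(cpu)
		foo_online_cpu(cpu);
	return 0;
}
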
Another approach is to have the registration function return the
CPU mask corresponding to the instant at which registration occurred,
perhaps via an additional function argument that points to a
cpumask_var_t that can be NULL if you don't care. Then you can
do setup for the CPUs indicated in the mask.

Or am I missing the race you had in mind?  Or is the point to make
sure that the notifiers execute in order?
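
In any case, a rough sketch of the registration-time-snapshot idea,
as if it lived next to register_cpu_notifier() in kernel/cpu.c (the
function name and the extra argument are made up, and I am using a
plain struct cpumask pointer rather than a cpumask_var_t for
simplicity):

/*
 * Register a hotplug notifier and hand back a snapshot of the CPUs
 * that were online at the instant of registration, so that the
 * caller can do its setup for exactly those CPUs without racing
 * against concurrent hotplug operations.
 */
int register_cpu_notifier_snapshot(struct notifier_block *nb,
				   struct cpumask *online_snapshot)
{
	int ret;

	cpu_maps_update_begin();	/* excludes cpu_up()/cpu_down() */
	ret = raw_notifier_chain_register(&cpu_chain, nb);
	if (!ret && online_snapshot)
		cpumask_copy(online_snapshot, cpu_online_mask);
	cpu_maps_update_done();
	return ret;
}

The caller then does its setup for the CPUs in the snapshot, and the
notifier covers everything that comes online afterwards.
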
> 2. There is a mismatch between the code and the documentation around
> the difference between [un/register]_hotcpu_notifier and
> [un/register]_cpu_notifier. And I remember seeing several places in
> the code that use them inconsistently. Not terribly important, but
> good to fix it up while we are at it.

The following led me to believe that they were the same:
#define register_hotcpu_notifier(nb) register_cpu_notifier(nb)
#define unregister_hotcpu_notifier(nb) unregister_cpu_notifier(nb)

What am I missing here?

> 3. There was another thread where stuff related to CPU hotplug had been
> discussed. It had exposed some new challenges to CPU hotplug, if we
> were to support asynchronous smp booting.
>
> http://thread.gmane.org/gmane.linux.kernel/1246209/focus=48535
> http://thread.gmane.org/gmane.linux.kernel/1246209/focus=48542
> http://thread.gmane.org/gmane.linux.kernel/1246209/focus=1253241
> http://thread.gmane.org/gmane.linux.kernel/1246209/focus=1253267

Good points! ;-)

> 4. Because the current CPU offline code depends on stop_machine(), every
> online CPU must cooperate with the offline event. This means, whenever
> we do a preempt_disable(), it ensures not only that that particular
> CPU won't go offline, but also that *no* CPU can go offline. This
> is more like a side-effect of using stop_machine().
>
> So when trying to move over to stop_one_cpu(), we have to carefully audit
> places where preempt_disable() has been used in that manner (i.e.,
> preempt_disable used as a light-weight and non-blocking form of
> get_online_cpus()). Because when we move to stop_one_cpu() to do CPU offline,
> a preempt disabled section will prevent only that particular CPU from
> going offline.
>
> I haven't audited preempt_disable() calls yet, but one such use was there
> in brlocks (include/linux/lglock.h) until quite recently.
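
For concreteness, the sort of usage I believe you mean, with
poke_remote_cpu() as a made-up stand-in for "send an IPI / touch
another CPU's per-CPU state":

#include <linux/cpumask.h>
#include <linux/preempt.h>

static void poke_remote_cpu(unsigned int cpu)
{
	/* IPIs, remote per-CPU data, ... */
}

static void foo_poke_all_cpus(void)
{
	unsigned int cpu;

	/*
	 * Relies on the stop_machine()-based offline path:  while we
	 * have preemption disabled, no CPU can go offline, so every
	 * CPU in cpu_online_mask stays valid until preempt_enable().
	 * With a stop_one_cpu()-based offline, only *this* CPU would
	 * be protected, and the loop would race with hotplug.
	 */
	preempt_disable();
	for_each_online_cpu(cpu)
		poke_remote_cpu(cpu);
	preempt_enable();
}
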

I was thinking in terms of the offline code path doing a synchronize_sched()
to allow preempt_disable() to retain a reasonable approximation of its
current semantics. This would require a pair of CPU masks, one for code
using CPU-based primitives (e.g., sending IPIs) and another for code
implementing those primitives.

Or is there some situation that makes this approach fail?
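
Very roughly, the offline path I have in mind (not real code --
cpu_ipi_mask is a hypothetical second mask consulted by users of
CPU-based primitives, take_this_cpu_down() is a made-up stopper
callback, and cpu_online_mask keeps roughly its current meaning for
the code implementing those primitives):

#include <linux/cpumask.h>
#include <linux/rcupdate.h>
#include <linux/stop_machine.h>

static struct cpumask cpu_ipi_mask;	/* hypothetical "users" mask */

static int take_this_cpu_down(void *unused)
{
	/* __cpu_disable(), migrate interrupts and tasks, etc. */
	return 0;
}

static int sketch_cpu_down(unsigned int cpu)
{
	/* 1. Hide the CPU from users of CPU-based primitives. */
	cpumask_clear_cpu(cpu, &cpu_ipi_mask);

	/*
	 * 2. Wait for every preempt_disable() section that might
	 *    have sampled the old mask.  After this, nothing new is
	 *    directed at the dying CPU, so preempt_disable() keeps
	 *    a reasonable approximation of its current semantics.
	 */
	synchronize_sched();

	/* 3. Take down just this one CPU -- no stop_machine(). */
	return stop_one_cpu(cpu, take_this_cpu_down, NULL);
}
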
Thanx, Paul