Message-ID: <alpine.LFD.2.02.1208010907210.32033@ionos>
Date: Wed, 1 Aug 2012 09:10:09 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Rusty Russell <rusty@...tcorp.com.au>
cc: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
Alan Stern <stern@...land.harvard.edu>, mingo@...nel.org,
peterz@...radead.org, paulmck@...ux.vnet.ibm.com,
namhyung@...nel.org, tj@...nel.org, rjw@...k.pl,
nikunj@...ux.vnet.ibm.com, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/6] CPU hotplug: Reverse invocation of notifiers
during CPU hotplug
On Fri, 27 Jul 2012, Rusty Russell wrote:
> On Wed, 25 Jul 2012 18:30:41 +0200 (CEST), Thomas Gleixner <tglx@...utronix.de> wrote:
> > The problem with the current notifiers is that we only have ordering
> > for a few specific callbacks, but we don't have the faintest idea in
> > which order all other random stuff is brought up and torn down.
> >
> > So I started experimenting with the following:
> >
> > struct hotplug_event {
> >         int (*bring_up)(unsigned int cpu);
> >         int (*tear_down)(unsigned int cpu);
> > };
> >
> > enum hotplug_events {
> >         CPU_HOTPLUG_START,
> >         CPU_HOTPLUG_CREATE_THREADS,
> >         CPU_HOTPLUG_INIT_TIMERS,
> >         ...
> >         CPU_HOTPLUG_KICK_CPU,
> >         ...
> >         CPU_HOTPLUG_START_THREADS,
> >         ...
> >         CPU_HOTPLUG_SET_ONLINE,
> >         ...
> >         CPU_HOTPLUG_MAX_EVENTS,
> > };
>
> This looks awfully like hardcoded a list of calls, without the
> readability :)
I'd love to make it a list of calls, but we have module users of cpu
hotplug, which makes a hardcoded call list a tad hard.
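A minimal userspace sketch of what I have in mind: a fixed-order event
table where modules claim a pre-allocated slot instead of appending to
an opaque notifier list, so bring-up order stays explicit and tear-down
can run in reverse on failure. All names here (cpuhp_register,
cpuhp_bring_up, the demo callbacks) are invented for illustration, not
existing kernel API:

```c
#include <assert.h>
#include <stddef.h>

struct hotplug_event {
	int (*bring_up)(unsigned int cpu);
	int (*tear_down)(unsigned int cpu);
};

enum hotplug_events {
	CPU_HOTPLUG_START,
	CPU_HOTPLUG_CREATE_THREADS,
	CPU_HOTPLUG_KICK_CPU,
	CPU_HOTPLUG_SET_ONLINE,
	CPU_HOTPLUG_MAX_EVENTS,
};

/* Fixed-order table; each slot is owned by exactly one subsystem. */
static struct hotplug_event hotplug_events[CPU_HOTPLUG_MAX_EVENTS];

/* Modules claim a slot; a taken slot is refused, so ordering stays
 * explicit no matter when a module loads. */
static int cpuhp_register(enum hotplug_events ev,
			  int (*up)(unsigned int),
			  int (*down)(unsigned int))
{
	if (ev >= CPU_HOTPLUG_MAX_EVENTS || hotplug_events[ev].bring_up)
		return -1;
	hotplug_events[ev].bring_up = up;
	hotplug_events[ev].tear_down = down;
	return 0;
}

/* Walk the table in enum order; on failure, tear down in reverse. */
static int cpuhp_bring_up(unsigned int cpu)
{
	int ev, ret = 0;

	for (ev = 0; ev < CPU_HOTPLUG_MAX_EVENTS; ev++) {
		if (!hotplug_events[ev].bring_up)
			continue;
		ret = hotplug_events[ev].bring_up(cpu);
		if (ret)
			break;
	}
	if (ret) {
		while (--ev >= 0)
			if (hotplug_events[ev].tear_down)
				hotplug_events[ev].tear_down(cpu);
	}
	return ret;
}

/* Demo callbacks recording invocation order (illustration only). */
static int call_order[4];
static int ncalls;

static int demo_up_threads(unsigned int cpu)
{
	(void)cpu;
	call_order[ncalls++] = CPU_HOTPLUG_CREATE_THREADS;
	return 0;
}

static int demo_up_kick(unsigned int cpu)
{
	(void)cpu;
	call_order[ncalls++] = CPU_HOTPLUG_KICK_CPU;
	return 0;
}
```

Note that even if a module registers its KICK_CPU slot before another
registers CREATE_THREADS, bring-up still runs them in table order.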
> OK, I finally got off my ass and looked at the different users of cpu
> hotplug. Some are just doing crazy stuff, but most seem to fall into
> two types:
>
> 1) Hardware-style cpu callbacks (CPU_UP_PREPARE & CPU_DEAD)
> 2) Live cpu callbacks (CPU_ONLINE & CPU_DOWN_PREPARE)
>
> I think this is what Srivatsa was referring to with "physical" and
> "logical" parts. Maybe we should explicitly split them, with the idea
> that we'd automatically call the other one if we hit an error.
>
> struct cpu_hotplug_physical {
>         int (*coming)(unsigned int cpu);
>         void (*gone)(unsigned int cpu);
> };
>
> struct cpu_hotplug_logical {
>         void (*arrived)(unsigned int cpu);
>         int (*going)(unsigned int cpu);
> };
>
> Several of the live cpu callbacks seem racy to me, since we could be
> running userspace tasks before CPU_ONLINE. It'd be nice to fix this,
> too.
Yes, I know. I want to change that as well. The trick here is that we
can schedule per-cpu stuff on a not yet fully online cpu, and only
allow user space tasks onto that newly onlined cpu once all the
callbacks have been executed.
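A toy model of that gating, with invented names (cpu_state, the
online/active flags, can_schedule): a newly onlined cpu accepts bound
per-cpu kernel threads immediately, but the scheduler may only place
user tasks there after the online callbacks have run and the cpu is
marked active:

```c
#include <assert.h>
#include <stdbool.h>

struct cpu_state {
	bool online;	/* hardware up, per-cpu threads may run */
	bool active;	/* callbacks done, user tasks may be placed */
};

static bool can_schedule(const struct cpu_state *cpu, bool user_task)
{
	if (!cpu->online)
		return false;
	/* Per-cpu kernel threads may run during bring-up ... */
	if (!user_task)
		return true;
	/* ... but user tasks must wait for the active mark. */
	return cpu->active;
}

static void run_online_callbacks(struct cpu_state *cpu)
{
	/* placeholder: CPU_ONLINE-style callbacks would run here,
	 * free of races with user space */
	(void)cpu;
}

static void cpu_bring_up(struct cpu_state *cpu)
{
	cpu->online = true;		/* per-cpu threads can start */
	run_online_callbacks(cpu);	/* all online work completes */
	cpu->active = true;		/* now user tasks may arrive */
}
```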
Thanks,
tglx