Message-Id: <200704091514.56167.rjw@sisk.pl>
Date: Mon, 9 Apr 2007 15:14:55 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Pavel Machek <pavel@....cz>
Cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Gautham R Shenoy <ego@...ibm.com>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Oleg Nesterov <oleg@...sign.ru>
Subject: Re: [RFD] CPU hotplug and suspend
Hi,
On Monday, 9 April 2007 16:03, Pavel Machek wrote:
> > Currently, we use CPU hotplug to disable nonboot CPUs in the suspend code
> > paths, but with the recent change of code ordering (i.e. nonboot CPUs are
> > now disabled after freezing tasks _and_ devices) it has become quite
> > troublesome. The reason for this is that some CPU hotplug notifiers,
> > registered and called on each run of cpu_up()/cpu_down(), assume the
> > system is fully functional, which is not the case during suspend.
> > Moreover, at least some of them do things that are not really necessary
> > for disabling or enabling the nonboot CPUs.
>
> Right.
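
To illustrate the kind of notifier I mean, the usual pattern looks more or
less like the sketch below (the names are made up and the details vary from
one subsystem to another, only the shape matters):

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/notifier.h>

static int foo_cpu_callback(struct notifier_block *nfb,
                            unsigned long action, void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action) {
        case CPU_UP_PREPARE:
                printk(KERN_DEBUG "foo: preparing CPU %u\n", cpu);
                /* allocate per-CPU state; may sleep and may rely on other
                 * subsystems that are frozen during suspend */
                break;
        case CPU_DEAD:
                printk(KERN_DEBUG "foo: CPU %u is down\n", cpu);
                /* free per-CPU state, migrate pending work, etc. */
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block foo_cpu_notifier = {
        .notifier_call = foo_cpu_callback,
};

static int __init foo_init(void)
{
        /* from now on the callback runs on every cpu_up()/cpu_down(),
         * including the ones done only to suspend/resume the machine */
        register_cpu_notifier(&foo_cpu_notifier);
        return 0;
}
module_init(foo_init);

Such a callback cannot tell a "real" hotplug operation from one done only
for the sake of suspend, and in the suspend case it runs with tasks and
devices already frozen.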
>
> > The advantage of using CPU hotplug (in its current form) for suspending is
> > that if some CPUs don't reappear during resume, we are safe. Still, I
> > think it would be more appropriate, and simpler in the long run, to notify
> > the interested subsystems _only_ if one (or more) CPUs are not functional
> > after the resume.
>
> I'm afraid that adding a 'cpu not there so simulate unplug' path will
> make it complex and prone to failure, as _no one_ is going to test it.
Does that mean you think we should stick with the current approach and sort
out all the issues as they show up, or should we stop using CPU hotplug for
suspending without implementing the 'cpu not there so simulate unplug' path
at all (e.g. we could simply fail the resume instead)?
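
For reference, the code ordering I mentioned above is roughly this (much
simplified, and the details differ between suspend and hibernation):

        freeze_processes();          /* freeze user space and kernel threads */
        device_suspend(PMSG_FREEZE); /* PMSG_SUSPEND in the suspend-to-RAM case */
        disable_nonboot_cpus();      /* cpu_down() for each nonboot CPU,
                                        so all the hotplug notifiers run here */

        /* ... create the image or enter the sleep state ... */

        enable_nonboot_cpus();       /* cpu_up() again, notifiers run again */
        device_resume();
        thaw_processes();

so every notifier callback is invoked at a point where most of the system is
already frozen.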
Rafael