Message-ID: <20100511172442.GB13931@atomide.com>
Date: Tue, 11 May 2010 10:24:43 -0700
From: Tony Lindgren <tony@...mide.com>
To: Matthew Garrett <mjg@...hat.com>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Kevin Hilman <khilman@...prootsystems.com>,
Arve Hjønnevåg <arve@...roid.com>,
linux-pm@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
Alan Stern <stern@...land.harvard.edu>,
Tejun Heo <tj@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
Paul Walmsley <paul@...an.com>, magnus.damm@...il.com,
mark gross <mgross@...ux.intel.com>,
Arjan van de Ven <arjan@...radead.org>,
Geoff Smith <geoffx.smith@...el.com>,
Brian Swetland <swetland@...gle.com>
Subject: Re: [PATCH 0/8] Suspend block api (version 6)
* Matthew Garrett <mjg@...hat.com> [100511 09:59]:
> On Tue, May 11, 2010 at 09:58:21AM -0700, Tony Lindgren wrote:
> > * Matthew Garrett <mjg@...hat.com> [100511 09:41]:
> > > Yes. You still need suspend blocks.
> >
> > Maybe not, if echo opportunistic > /sys/power/policy gets cleared by
> > the kernel when the idle loop can't complete the suspend. That would
> > mean something, for example a driver, has blocked the suspend attempt.
> > The system keeps running, and userspace can deal with the situation.
>
> So an event arrives just as userspace does this write. How do you avoid
> the race? Plausible answers mostly appear to end up looking like suspend
> blockers.
Assuming you attempt suspend in a custom pm_idle function, any driver
handling the event can fail the suspend attempt.
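Something like this very rough sketch, just to illustrate the flow I mean
(the opportunistic flag and system_fully_idle() here are made-up names, and
in practice pm_suspend() would have to be kicked off from process context
rather than called straight from the idle loop):

static bool opportunistic;	/* set via "echo opportunistic > /sys/power/policy" */

static void opportunistic_pm_idle(void)
{
	if (opportunistic && system_fully_idle()) {
		/*
		 * pm_suspend() fails if any driver refuses to suspend,
		 * for example because it is handling a wakeup event.
		 */
		if (pm_suspend(PM_SUSPEND_MEM))
			opportunistic = false;	/* let userspace handle it */
		return;
	}
	default_idle();
}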
That would clear the opportunistic suspend flag, so userspace would still
be running and could handle the event. When userspace is done, it can
again echo opportunistic > /sys/power/policy.
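On the userspace side the re-arm is just the sysfs write again, i.e.
something along these lines (illustration only, using the proposed
/sys/power/policy interface):

#include <fcntl.h>
#include <unistd.h>

/* re-arm opportunistic suspend once the wakeup event has been handled */
static void rearm_opportunistic_suspend(void)
{
	int fd = open("/sys/power/policy", O_WRONLY);

	if (fd < 0)
		return;
	write(fd, "opportunistic", sizeof("opportunistic") - 1);
	close(fd);
}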
As for the failed suspend path in the kernel: currently the kernel unwinds
(resumes) all the already-suspended drivers when one driver fails the
suspend, but it should be possible to optimize that path.
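For reference, this is roughly what suspend_devices_and_enter() does today
(heavily paraphrased from kernel/power/suspend.c, with error handling and
platform hooks left out):

	error = dpm_suspend_start(PMSG_SUSPEND);
	if (!error)
		suspend_enter(state);		/* the actual suspend */
	/*
	 * Both the success and the failure case end up here, resuming
	 * every device that already got suspended. This is the unwind
	 * path that could be made cheaper for the failure case.
	 */
	dpm_resume_end(PMSG_RESUME);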
Regards,
Tony