[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-Id: <200908092249.53401.rjw@sisk.pl>
Date: Sun, 9 Aug 2009 22:49:53 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Alan Stern <stern@...land.harvard.edu>
Cc: "Linux-pm mailing list" <linux-pm@...ts.linux-foundation.org>,
Magnus Damm <magnus.damm@...il.com>, Greg KH <gregkh@...e.de>,
Pavel Machek <pavel@....cz>, Len Brown <lenb@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH update x2] PM: Introduce core framework for run-time PM of I/O devices (rev. 13)
On Sunday 09 August 2009, Alan Stern wrote:
> On Sun, 9 Aug 2009, Rafael J. Wysocki wrote:
>
> > > > How exactly would you like to implement it
> > > > instead?
> > >
> > > As described above. The barrier would be equivalent to
> > > pm_runtime_get_noresume followed by pm_runtime_disable except that it
> > > wouldn't actually disable anything.
> >
> > OK, I can do that, but the only difference between that and the above sequence
> > of three calls will be the possibility to call resume helpers while the
> > "barrier" is in progress.
>
> Exactly. In other words, if the driver tries to carry out a resume
> while the barrier is running, the resume won't get lost. Whereas with
> the temporarily-disable approach, it _would_ get lost.
>
> > Allowing runtime PM helpers to be run during system sleep transitions would be
> > problematic IMHO, because the run-time PM 'states' are not well defined at that
> > time. Consequently, the rules that the PM helpers follow do not really hold
> > during system sleep transitions.
>
> The workqueue will be frozen, so runtime PM helpers will run only if
> they are invoked more or less directly by the driver (i.e., through
> pm_runtime_resume, ...). I think we should allow drivers to do what
> they want, especially between the "prepare" and "suspend" stages.
Well, I'm not sure if that's a good idea, but also I have no good techincal
arguments against it at the moment. And I'm too tired to argue. ;-)
> > Also, in principle the device driver's ->suspend() routine (the non-runtime
> > one), or even the ->prepare() callback, may notice that the remote wake-up has
> > happened and put the device back into the full power state and return -EBUSY.
>
> It may. But then again, it may not -- it may depend on the runtime PM
> core to make sure that resume requests get forwarded appropriately.
>
> Furthermore, if you disable runtime PM _before_ calling the prepare
> method, that leaves a window during which the driver has no reason to
> realize that anything unusual is going on.
>
> > Still, we can allow runtime PM requests to be put into the workqueue during
> > system sleep transitions, to be executed after the resume (or in case the
> > suspend fails, that will make the action described in the previous paragraph
> > somewhat easier). It seems we'd need a separate flag for it, though.
>
> If every device gets resumed at the end of a system sleep, even the
> ones that were runtime-suspended before the sleep began, then there's
> no reason to preserve requests in the workqueue. But if
> previously-suspended devices don't get resumed at the end of a system
> sleep, then we should allow requests to remain in the workqueue.
We also should preserve the requests in case the system sleep transition
fails.
> In the end, it's probably safer and easier just to leave the workqueue
> alone -- freeze and unfreeze it, but don't meddle with its contents.
>
> The whole question of remote wakeup vs. runtime suspend vs. system
> sleep is complicated, and people haven't dealt with all the issues yet.
Agreed.
> For instance, it seems quite likely that with some devices you would
> want to enable remote wakeup during runtime suspend but not during
> system sleep. We don't have any good way to do this.
Yes, for now we have to assume that any device with wakeup enabled is a
wakeup device.
OK, I'll post the new version of the patch shortly. Please check if the
barrier mechanism is implemeted and used correctly.
Best,
Rafael
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists