Date:	Sun, 9 Aug 2009 11:19:51 -0400 (EDT)
From:	Alan Stern <stern@...land.harvard.edu>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
cc:	Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>,
	Magnus Damm <magnus.damm@...il.com>, Greg KH <gregkh@...e.de>,
	Pavel Machek <pavel@....cz>, Len Brown <lenb@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH update x2] PM: Introduce core framework for run-time PM
 of I/O devices (rev. 13)

On Sun, 9 Aug 2009, Rafael J. Wysocki wrote:

> > >  How exactly would you like to implement it
> > > instead?
> > 
> > As described above.  The barrier would be equivalent to
> > pm_runtime_get_noresume followed by pm_runtime_disable except that it
> > wouldn't actually disable anything.
> 
> OK, I can do that, but the only difference between that and the above sequence
> of three calls will be the possibility to call resume helpers while the
> "barrier" is in progress.

Exactly.  In other words, if the driver tries to carry out a resume
while the barrier is running, the resume won't get lost.  Whereas with 
the temporarily-disable approach, it _would_ get lost.
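
For concreteness, a minimal sketch of the difference (the barrier
helper's name is hypothetical here; the other calls are the ones from
the patch):

#include <linux/pm_runtime.h>

/*
 * Temporarily-disable approach: a pm_runtime_resume() issued inside
 * this window is refused, so the resume is lost.
 */
static void quiesce_by_disabling(struct device *dev)
{
        pm_runtime_get_noresume(dev);
        pm_runtime_disable(dev);
        /* ... runtime PM is quiet, but resumes are refused ... */
}

/*
 * Barrier approach: wait for callbacks and pending requests to
 * finish without disabling anything, so a concurrent
 * pm_runtime_resume() still takes effect.
 */
static void quiesce_by_barrier(struct device *dev)
{
        pm_runtime_barrier(dev);        /* hypothetical helper */
}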

> Allowing runtime PM helpers to be run during system sleep transitions would be
> problematic IMHO, because the run-time PM 'states' are not well defined at that
> time.  Consequently, the rules that the PM helpers follow do not really hold
> during system sleep transitions.

The workqueue will be frozen, so runtime PM helpers will run only if
they are invoked more or less directly by the driver (i.e., through
pm_runtime_resume, ...).  I think we should allow drivers to do what
they want, especially between the "prepare" and "suspend" stages.
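
Roughly like this (mydrv_prepare() is a made-up callback;
pm_request_resume() is the asynchronous request helper):

#include <linux/pm_runtime.h>

/* Example ->prepare() running while the pm workqueue is frozen. */
static int mydrv_prepare(struct device *dev)
{
        pm_runtime_resume(dev);         /* synchronous: runs now */
        pm_request_resume(dev);         /* asynchronous: waits in the
                                         * frozen workqueue until thaw */
        return 0;
}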

> Also, in principle the device driver's ->suspend() routine  (the non-runtime
> one), or even the ->prepare() callback, may notice that the remote wake-up has
> happened and put the device back into the full power state and return -EBUSY.

It may.  But then again, it may not -- it may depend on the runtime PM  
core to make sure that resume requests get forwarded appropriately.

Furthermore, if you disable runtime PM _before_ calling the prepare 
method, that leaves a window during which the driver has no reason to 
realize that anything unusual is going on.
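
To illustrate, a suspend routine doing that might look something like
this (the driver structure and flag are made up; returning -EBUSY to
abort the sleep is the point):

#include <linux/device.h>
#include <linux/pm_runtime.h>

struct mydrv {
        bool remote_wakeup_seen;        /* hypothetical flag */
};

static int mydrv_suspend(struct device *dev)
{
        struct mydrv *priv = dev_get_drvdata(dev);

        if (priv->remote_wakeup_seen) {
                pm_runtime_resume(dev); /* back to full power */
                return -EBUSY;          /* abort the system sleep */
        }
        return 0;
}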

> Still, we can allow runtime PM requests to be put into the workqueue during
> system sleep transitions, to be executed after the resume (or after a failed
> suspend, which would make the action described in the previous paragraph
> somewhat easier).  It seems we'd need a separate flag for it, though.

If every device gets resumed at the end of a system sleep, even the
ones that were runtime-suspended before the sleep began, then there's
no reason to preserve requests in the workqueue.  But if
previously-suspended devices don't get resumed at the end of a system
sleep, then we should allow requests to remain in the workqueue.

In the end, it's probably safer and easier just to leave the workqueue 
alone -- freeze and unfreeze it, but don't meddle with its contents.

The whole question of remote wakeup vs. runtime suspend vs. system 
sleep is complicated, and people haven't dealt with all the issues yet.  
For instance, it seems quite likely that with some devices you would 
want to enable remote wakeup during runtime suspend but not during 
system sleep.  We don't have any good way to do this.
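
For instance, a driver might want something like the sketch below, but
the only knob we have is device_may_wakeup(), and it applies to both
cases at once (the mydrv_* helpers are hypothetical):

#include <linux/device.h>
#include <linux/pm_wakeup.h>

/* Program the device's wakeup-enable bit (hypothetical hardware op). */
static void mydrv_hw_set_wakeup(struct device *dev, bool enable)
{
}

static int mydrv_runtime_suspend(struct device *dev)
{
        /* Runtime suspend is pointless without remote wakeup:
         * the device could never ask to be serviced again. */
        mydrv_hw_set_wakeup(dev, true);
        return 0;
}

static int mydrv_system_suspend(struct device *dev)
{
        /* For system sleep we might want a different policy, but
         * the single user-visible setting covers both cases: */
        mydrv_hw_set_wakeup(dev, device_may_wakeup(dev));
        return 0;
}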

Alan Stern
