Message-ID: <Pine.LNX.4.44L0.0906291504050.17436-100000@iolanthe.rowland.org>
Date:	Mon, 29 Jun 2009 15:25:57 -0400 (EDT)
From:	Alan Stern <stern@...land.harvard.edu>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
cc:	Greg KH <gregkh@...e.de>, LKML <linux-kernel@...r.kernel.org>,
	ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
	Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [patch update] PM: Introduce core framework for run-time PM of
 I/O devices (rev. 6)

On Mon, 29 Jun 2009, Rafael J. Wysocki wrote:

> IMO one can think of pm_request_resume() as a top half of pm_runtime_resume().

Normal top halves don't trigger before the circumstances are
appropriate.  For example, if you enable remote wakeup on a USB device,
it won't send a wakeup signal before it has been powered down.  A
driver calling pm_request_resume while the device is still at full
power is like a USB device sending a wakeup request while it is still
powered up.  So IMO the analogy with top halves isn't a good one.

> Thus, it should either queue up a request to run pm_runtime_resume() or leave
> the status as though pm_runtime_resume() ran.  Anything else would be
> internally inconsistent.  So, if pm_runtime_resume() cancels pending suspend
> requests, pm_request_resume() should do the same or the other way around.
> 
> Now, arguably, ignoring pending suspend requests is somewhat easier from
> the core's point of view, but it may not be so for drivers.

The argument I gave in the previous email demonstrates that it doesn't
make any difference to drivers.  Either way, they have to use two I/O
pathways, they have to do a pm_runtime_get before pm_request_resume,
and they have to do a pm_request_put after the I/O is done.
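
For the record, the pattern I have in mind looks roughly like this
(just a sketch, using the function names we've been using in this
thread; the foo driver, its lock, its request list, and
foo_issue_request() are all made up for illustration):

static void foo_start_io(struct foo *foo)
{
	pm_runtime_get(foo->dev);	/* block suspends while I/O is pending */

	spin_lock_irq(&foo->lock);
	if (foo->is_active) {
		/* Pathway 1: the device is powered up, so issue the
		 * I/O right away. */
		foo_issue_request(foo);
	} else {
		/* Pathway 2: queue the request and ask for an async
		 * resume; the runtime-resume callback issues it later. */
		list_add_tail(&foo->req.node, &foo->queued_reqs);
		pm_request_resume(foo->dev);
	}
	spin_unlock_irq(&foo->lock);
}

static void foo_io_done(struct foo *foo)
{
	pm_request_put(foo->dev);	/* I/O finished; suspends allowed again */
}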

Of course, this is all somewhat theoretical.  I still don't know of any 
actual drivers that do the equivalent of pm_request_resume.

> My point is that the core should always treat pending suspend requests in the
> same way.  If they are canceled by pm_runtime_resume(), then
> pm_request_resume() should also cancel them and it shouldn't be possible
> to schedule a suspend request when the resume counter is greater than 0.
> In turn, if they are ignored by pm_runtime_resume(), then pm_request_resume()
> should also ignore them and there's no point to prevent pm_request_suspend()
> from scheduling a suspend request if the resume counter is greater than 0.
> 
> Any other type of behavior has a potential to confuse driver writers.

Another possible approach you could take when the call to
cancel_delayed_work fails (which should be rare) is to turn on RPM_WAKE
in addition to RPM_IDLE and leave the suspend request queued.  When
__pm_runtime_suspend sees both flags are set, it should abort and set
the status directly back to RPM_ACTIVE.  At that time the idle
notifications can start up again.
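
In code it would be something along these lines (again just a sketch;
the field and flag names are my guesses at what's in your current
patch):

/* In pm_request_resume(), when the cancel fails: */

	if (!cancel_delayed_work(&dev->power.suspend_work)) {
		/* Leave the suspend request queued but mark it stale. */
		dev->power.runtime_status |= RPM_WAKE;
	}

/* In __pm_runtime_suspend(), when that queued request finally runs: */

	if (dev->power.runtime_status == (RPM_IDLE | RPM_WAKE)) {
		/* A resume request overtook this suspend request: abort
		 * and go straight back to RPM_ACTIVE; idle notifications
		 * can start up again from this point. */
		dev->power.runtime_status = RPM_ACTIVE;
		return 0;
	}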

Is this any better?  I can't see how drivers would care, though.

Alan Stern

P.S.: What do you think should happen if there's a delayed suspend
request pending, then pm_request_resume is called (and it leaves the
request queued), and then someone calls pm_runtime_suspend?  You've got
two pending requests and a synchronous call all active at the same
time!
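
In other words (call signatures approximate):

	pm_request_suspend(dev);	/* delayed suspend request gets queued */
	/* some time later: */
	pm_request_resume(dev);		/* resume request queued; the suspend
					   request is left where it is */
	/* and then: */
	pm_runtime_suspend(dev);	/* synchronous call arrives while both
					   requests are still outstanding */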

