Message-ID: <Pine.LNX.4.44L0.1006211055090.1687-100000@iolanthe.rowland.org>
Date:	Mon, 21 Jun 2010 11:06:27 -0400 (EDT)
From:	Alan Stern <stern@...land.harvard.edu>
To:	David Brownell <david-b@...bell.net>
cc:	Florian Mickler <florian@...kler.org>, <markgross@...gnar.org>,
	mark gross <640e9920@...il.com>, Neil Brown <neilb@...e.de>,
	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>
Subject: Re: [linux-pm] [RFC][PATCH] PM: Avoid losing wakeup events during suspend

On Sun, 20 Jun 2010, David Brownell wrote:

> Can we put this more directly:  the problem is
> that the *SYSTEM ISN'T FULLY SUSPENDED* when the
> hardware wake event triggers?  (Where "*SYSTEM*"
> includes userspace, not just the kernel.  In fact
> the overall system is built from many subsystems,
> some in the kernel and some in userspace.)

Indeed, the system may not even be partially suspended when the wake 
event triggers.

> At the risk of being prematurely general:  I'd
> point out that these subsystems probably have
> sequencing requirements.  kernel-then-user is
> a degenerate case, and surely oversimplified.
> There are other examples, e.g. between kernel
> subsystems...  Like needing to suspend a PMIC
> before the bus it uses, where that bus uses
> a task to manage request/response protocols.
> (Think I2C or SPI.)
> 
> This is like the __init/__exit sequencing mess...
> 
> In terms of userspace event delivery, I'd say
> it's a bug in the event mechanism if taking the
> next step in suspension drops any event.  It
> should be queued, not lost...  As a rule the
> hardware queuing works (transparently)...

There may be a misunderstanding here...  People talk about events
getting lost, but what they (usually) mean is that the event isn't
actually _dropped_ -- rather, it fails to trigger a wakeup or to
prevent a suspend.  When something else causes the system to resume
later on, the event will be delivered normally.

This means that the problem is not one of sequencing.  The problem is 
twofold:

	To recognize when a wakeup event has occurred, and therefore
	that it is no longer safe to allow the system to suspend;

	And to recognize when a wakeup event has been completely
	handled, and therefore that it is once again safe to allow the
	system to suspend.
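
To make that concrete, here is a minimal sketch of the bookkeeping
involved.  The names (wakeup_event_begin, wakeup_event_end,
events_in_progress) are hypothetical, for illustration only -- this is
not an existing kernel interface, just the shape of the two-sided
accounting described above:

	/* Hypothetical sketch, not a real kernel API.  One counter
	 * tracks events still being processed; a second counts all
	 * events ever seen, so a power manager can notice that new
	 * events arrived behind its back. */
	static atomic_t events_in_progress = ATOMIC_INIT(0);
	static atomic_t event_count = ATOMIC_INIT(0);

	/* A wakeup event has been detected (e.g. in an interrupt
	 * handler): from here on it is not safe to suspend. */
	void wakeup_event_begin(void)
	{
		atomic_inc(&events_in_progress);
	}

	/* The event has been completely handled (perhaps after
	 * userspace reports back): suspending may be safe again. */
	void wakeup_event_end(void)
	{
		atomic_inc(&event_count);
		atomic_dec(&events_in_progress);
	}

	/* The suspend path aborts unless nothing is in flight and no
	 * new events have arrived since 'count' was last sampled. */
	bool may_suspend(int count)
	{
		return atomic_read(&events_in_progress) == 0 &&
		       atomic_read(&event_count) == count;
	}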

> > Of course, the underlying issue here is that the
> > kernel has no direct way to know when userspace
> > has finished processing an event.
> 
> 
> Again said more directly:  there's no current
> mechanism to coordinate subsystems.  Userspace
> can't communicate "I'm ready" to kernel, and
> vice versa.  (a few decades ago, APM could do
> that ... we dropped such mechanisms though, and
> I'm fairly sure APM's implementation was holey.)

Yes, that's a better way of putting it.  And it's not just a matter of
"userspace communicating with the kernel", because userspace is not
monolithic.  There has to be a way for one user process to communicate
this information to another (I like Florian's idea).  Of course, the
kernel doesn't have to worry about those details.

If one accepts a scheme in which all the suspend initiations and
cancellations are carried out by a single process (a power-manager
process), then the difficulties of communication and coordination
between the kernel and userspace are minimized.
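
For illustration, such a power-manager process might look something
like the sketch below.  The sysfs file names are assumptions made for
the example (a wakeup-count file of the kind discussed in this thread,
plus the usual /sys/power/state); the point is only the handshake:
sample the kernel's event count, then try to suspend, and back off if
the kernel reports that new wakeup events arrived in between.

	/* Hypothetical single power-manager process (userspace C).
	 * The wakeup_count file is assumed for illustration. */
	#include <stdio.h>

	static int try_suspend(void)
	{
		char buf[32];
		FILE *f;

		/* 1. Sample the current wakeup-event count. */
		f = fopen("/sys/power/wakeup_count", "r");
		if (!f)
			return -1;
		if (!fgets(buf, sizeof(buf), f)) {
			fclose(f);
			return -1;
		}
		fclose(f);

		/* 2. Hand the count back.  The kernel side would
		 * reject this write if any wakeup events occurred
		 * since step 1, cancelling the attempt. */
		f = fopen("/sys/power/wakeup_count", "w");
		if (!f)
			return -1;
		if (fputs(buf, f) < 0) {
			fclose(f);
			return -1;
		}
		if (fclose(f) != 0)
			return -1;	/* race lost: handle events, retry */

		/* 3. No new events slipped in: initiate the suspend. */
		f = fopen("/sys/power/state", "w");
		if (!f)
			return -1;
		fputs("mem\n", f);
		fclose(f);
		return 0;
	}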

Alan Stern
