Message-ID: <Pine.LNX.4.44L0.1110161018580.28864-100000@netrider.rowland.org>
Date: Sun, 16 Oct 2011 10:51:01 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: "Rafael J. Wysocki" <rjw@...k.pl>
cc: NeilBrown <neilb@...e.de>,
Linux PM list <linux-pm@...r.kernel.org>,
mark gross <markgross@...gnar.org>,
LKML <linux-kernel@...r.kernel.org>,
John Stultz <john.stultz@...aro.org>
Subject: Re: [RFC][PATCH 0/2] PM / Sleep: Extended control of suspend/hibernate interfaces
On Sat, 15 Oct 2011, Alan Stern wrote:
> Basically, what we need is a reliable way to intercept the existing
> mechanisms for suspend/hibernate and to redirect the requests to the PM
> daemon. When the daemon is started up in "legacy" mode, it assumes
> there is a legacy client (representing the entire set of
> non-wakeup-aware programs) that always forbids suspend _except_ when
> one of the old mechanisms is invoked.
The more I think about this, the better it seems. In essence, it
amounts to "virtualizing" the existing PM interface.
Let's add /sys/power/manage, and make it single-open. Whenever that
file is open, writes to /sys/power/state and /dev/snapshot don't work
normally; instead they get forwarded over /sys/power/manage (and
results get sent back). Suspend is easy; hibernation (because of its
multi-step nature) will be more difficult.
The only important requirement is that processes can use the poll
system call to wait for wakeup events. This may not always be possible
(consider timer expirations, for example), but we ought to be able to
make some sort of accommodation.
The PM daemon will communicate with its clients over a Unix-domain
socket. The protocol can be extremely simple: The daemon sends a byte
to the client when it wants to sleep, and the client sends the byte
back when it is ready to allow the system to go to sleep. There's
never more than one byte outstanding at any time in either direction.
The clients would be structured like this:
Open a socket connection to the PM daemon.
Loop:
Poll on possible events and the PM socket.
If any events occurred, handle them.
Otherwise if a byte was received from the PM daemon,
send it back.
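In C, a client might look something like this (an untested sketch; the
socket path /run/pm-daemon.sock is made up, and stdin stands in for
whatever event source the client really polls):

#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	int pm_fd = socket(AF_UNIX, SOCK_STREAM, 0);

	strcpy(addr.sun_path, "/run/pm-daemon.sock");	/* made-up path */
	if (pm_fd < 0 || connect(pm_fd, (struct sockaddr *)&addr,
				 sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	struct pollfd fds[2] = {
		{ .fd = STDIN_FILENO, .events = POLLIN }, /* event source */
		{ .fd = pm_fd,        .events = POLLIN }, /* PM socket */
	};

	for (;;) {
		char c;

		if (poll(fds, 2, -1) < 0)
			break;
		if (fds[0].revents & POLLIN) {
			/* An event arrived: handle it before answering
			 * the daemon, so suspend stays blocked. */
			read(STDIN_FILENO, &c, 1);
			continue;
		}
		if (fds[1].revents & POLLIN) {
			/* No pending events: echo the daemon's byte
			 * back to say "OK to sleep". */
			if (read(pm_fd, &c, 1) != 1)
				break;
			write(pm_fd, &c, 1);
		}
	}
	return 0;
}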
In non-legacy mode, the PM daemon's main loop is also quite simple:
1. Read /sys/power/wakeup_count.
2. For each client socket:
If a response to the previous transmission is still
pending, wait for it.
Send a byte (the data can be just a sequence number).
Wait for the byte to be echoed back.
3. Write /sys/power/wakeup_count.
4. Write a sleep command to /sys/power/manage.
A timeout can be added to step 2 if desired, but in this mode it isn't
needed.
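Concretely, one iteration of that loop might be (untested, and
/sys/power/manage doesn't exist yet, so step 4 is speculative;
client_fds[] holds the connected client sockets):

#include <fcntl.h>
#include <unistd.h>

static void pm_iteration(const int *client_fds, int nclients)
{
	static char seq;	/* simple sequence number */
	char count[32];
	ssize_t n;
	int fd, i;

	/* 1. Read /sys/power/wakeup_count. */
	fd = open("/sys/power/wakeup_count", O_RDWR);
	n = read(fd, count, sizeof(count));

	/* 2. Ask each client for permission; the blocking read waits
	 *    until the client has no unhandled events and echoes the
	 *    byte back. */
	seq++;
	for (i = 0; i < nclients; i++) {
		char echo;

		write(client_fds[i], &seq, 1);
		read(client_fds[i], &echo, 1);
	}

	/* 3. Write the count back.  This fails if a wakeup event
	 *    arrived in the meantime, in which case we start over. */
	if (write(fd, count, n) < n) {
		close(fd);
		return;
	}
	close(fd);

	/* 4. Ask the kernel to suspend via the proposed manage file. */
	fd = open("/sys/power/manage", O_WRONLY);	/* doesn't exist yet */
	write(fd, "mem", 3);
	close(fd);
}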
With legacy support enabled, we probably will want something like a
1-second timeout for step 2. We'll also need an extra step at the
beginning and one at the end:
0. Wait for somebody to write "standby" or "mem" to
/sys/power/state (received via the /sys/power/manage file).
5. Send the final status of the suspend command back to the
/sys/power/state writer.
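Assuming reads from /sys/power/manage block until a legacy process
writes to /sys/power/state, and writing a status string back completes
that process's write() call -- none of which is settled -- the legacy
loop could wrap the iteration above like so:

static void pm_legacy_loop(const int *client_fds, int nclients)
{
	char req[16];
	int manage_fd = open("/sys/power/manage", O_RDWR); /* speculative */

	for (;;) {
		ssize_t n;

		/* 0. Block until a legacy writer asks for
		 *    "standby" or "mem". */
		n = read(manage_fd, req, sizeof(req) - 1);
		if (n <= 0)
			break;
		req[n] = '\0';

		/* Steps 1-4 as above; in this mode the handshake
		 * reads in step 2 would also get a ~1-second timeout
		 * (not shown). */
		pm_iteration(client_fds, nclients);

		/* 5. Report the result back to the legacy writer
		 *    (status format is another open question). */
		write(manage_fd, "0", 1);
	}
}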
Equivalent support for hibernation is left as an exercise for the
reader.
Obviously the PM daemon will need a secondary thread to accept new
incoming socket connections, and these connections will have to be
synchronized with the end of the iteration in step 2 (i.e., don't
accept new connections between the end of step 2 and the end of step
4).
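The accept thread itself can be trivial; add_client() here is a
made-up helper for whatever bookkeeping maintains the client_fds[]
array, and the main loop would hold suspend_lock from the end of step
2 until the end of step 4:

#include <pthread.h>
#include <sys/socket.h>

static pthread_mutex_t suspend_lock = PTHREAD_MUTEX_INITIALIZER;

static void *accept_thread(void *arg)
{
	int listen_fd = *(int *)arg;

	for (;;) {
		int fd = accept(listen_fd, NULL, NULL);

		if (fd < 0)
			continue;
		pthread_mutex_lock(&suspend_lock);
		add_client(fd);	/* hypothetical: grows client_fds[] */
		pthread_mutex_unlock(&suspend_lock);
	}
	return NULL;
}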
Initial startup of the daemon will be a little tricky, because it
shouldn't start carrying out suspends until some clients have had a
chance to connect. For that matter, in non-legacy mode the daemon
might not want to initiate suspends when there are no clients -- the
system would never get anything done because it would go back to sleep
as soon as the kernel finished processing each wakeup event.
This really seems like it could work, and it wouldn't be tremendously
complicated. The only changes needed in the kernel would be the
"virtualization" (or forwarding) mechanism for legacy support.
Alan Stern