[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <200902182217.48321.rjw@sisk.pl>
Date: Wed, 18 Feb 2009 22:17:46 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Arve Hjønnevåg <arve@...roid.com>
Cc: Alan Stern <stern@...land.harvard.edu>,
"Woodruff, Richard" <r-woodruff2@...com>,
Arjan van de Ven <arjan@...radead.org>,
Kyle Moffett <kyle@...fetthome.net>,
Oliver Neukum <oliver@...kum.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, Pavel Machek <pavel@....cz>,
Nigel Cunningham <nigel@...el.suspend2.net>,
Matthew Garrett <mjg59@...f.ucam.org>,
mark gross <mgross@...ux.intel.com>,
Uli Luckas <u.luckas@...d.de>,
Igor Stoppa <igor.stoppa@...ia.com>,
Brian Swetland <swetland@...gle.com>,
Len Brown <lenb@...nel.org>
Subject: Re: [RFD] Automatic suspend
On Wednesday 18 February 2009, Arve Hjønnevåg wrote:
> On Tue, Feb 17, 2009 at 3:21 PM, Rafael J. Wysocki <rjw@...k.pl> wrote:
> > On Tuesday 17 February 2009, Alan Stern wrote:
> >> On Tue, 17 Feb 2009, Rafael J. Wysocki wrote:
> >>
> >> > Phase 1: I agree that system-auto-suspend-on, system-auto-suspend-off would be
> >> > useful, but I don't like the wakelocks interface. Do you think there is an
> >> > alternative way/mechanism of doing this?
> >>
> >> I rather like the suggestions Matthew Garrett has been making. They
> >> show how to improve the wakelock interface without losing any function.
> >>
> >> Really, the idea behind wakelocks comes down to the question of how to
> >> determine when the system is sufficiently idle to go into auto-suspend.
> >> There may be occasions when no task is runnable but userspace knows
> >> that the system should not go to sleep because some work will be done
> >> in the near future. (Arve's example of a non-empty input buffer is
> >> such a case.) How should userspace let the kernel know whether it's
> >> okay to suspend at these times? That is the problem userspace
> >> wakelocks are meant to solve.
> >
> > Still, do we really need multiple user space wakelocks (I'd prefer to call them
> > sleeplocks)? It seems that one such lock and a user space manager controlling
> > it should be sufficient.
>
> Yes, we could have a user space manager that all userspace wakelocks
> go through, but it would have to start before any other processes that
> need wakelocks and it would need a blocking ipc mechanism. The
> wakelock api that is provided to android applications does all this,
> but it is only available to java code. Supporting multiple userspace
> wakelocks in the kernel is simpler than adding another userspace
> wakelock layer.
>
> >> Kernel wakelocks are a separate matter. They are more like a form of
> >> optimization, preventing the kernel from starting an auto-suspend when
> >> some driver knows beforehand that it will return -EBUSY.
> >
> > I think kernel-side autosuspend (or rather autosleep) should only happen
> > after certain subset of devices have been suspended using a per-device
> > run-time autosuspend mechanism.
>
> When the last wakelock is released the task that we woke up to perform
> has finished. Why wait to re-enter suspend.
I don't really understand this comment. Could you please explain a bit?
> >> > Phase 3: Probably explicit control left to open/close.
> >>
> >> While that's generally a good idea, it's important to recognize that
> >> some devices should be runtime-suspended even while they are open.
> >
> > From the kernel side, yes (and that should be transparent to the user space
> > having them open). By the user space, no.
>
> Allowing user space to suspend input devices while they are still open
> is useful. The user-space code that reads from the input devices does
> not need to know if the device is suspended or not, and the kernel
> cannot auto suspend input devices based on inactivity.
Hmm. Why can't it?
Rafael
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists