Message-Id: <200902281118.14595.rjw@sisk.pl>
Date: Sat, 28 Feb 2009 11:18:12 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Arve Hjønnevåg <arve@...roid.com>
Cc: Pavel Machek <pavel@....cz>,
Alan Stern <stern@...land.harvard.edu>,
"Woodruff, Richard" <r-woodruff2@...com>,
Arjan van de Ven <arjan@...radead.org>,
Kyle Moffett <kyle@...fetthome.net>,
Oliver Neukum <oliver@...kum.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Nigel Cunningham <nigel@...el.suspend2.net>,
Matthew Garrett <mjg59@...f.ucam.org>,
mark gross <mgross@...ux.intel.com>,
Uli Luckas <u.luckas@...d.de>,
Igor Stoppa <igor.stoppa@...ia.com>,
Brian Swetland <swetland@...gle.com>,
Len Brown <lenb@...nel.org>
Subject: Re: [RFD] Automatic suspend
On Friday 27 February 2009, Arve Hjønnevåg wrote:
> On Fri, Feb 27, 2009 at 12:55 PM, Rafael J. Wysocki <rjw@...k.pl> wrote:
> > On Friday 27 February 2009, Pavel Machek wrote:
> >> On Fri 2009-02-27 15:22:39, Rafael J. Wysocki wrote:
> >> > On Friday 27 February 2009, Pavel Machek wrote:
> >> > > Hi!
> >> > >
> >> > > > > > Then, the decision making logic will be able to use /sys/power/sleep whenever
> >> > > > > > it wishes to and the kernel will be able to refuse to suspend if it's not
> >> > > > > > desirable at the moment.
> >> > > > > >
> >> > > > > > It seems to be flexible enough to me.
> >> > > > >
> >> > > > > This seems flexible enough to avoid race conditions, but it forces the
> >> > > > > user space power manager to poll when the kernel refuses to suspend.
> >> > > >
> >> > > > And if the kernel is supposed to start automatic suspend, it has to monitor
> >> > > > all of the wakelocks. IMO, it's better to allow the power manager to poll the
> >> > > > kernel if it refuses to suspend.
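For illustration, the user space side of that would be something like the
sketch below. It assumes the proposed /sys/power/sleep file fails the write
with -EBUSY when the kernel refuses; the exact error convention was never
settled in this thread.

	#include <errno.h>
	#include <fcntl.h>
	#include <unistd.h>

	/* Returns 1 after a completed suspend/resume cycle, 0 if the
	 * kernel refused (the caller decides when to retry), and -1 on
	 * any other error. */
	static int try_suspend(void)
	{
		int fd, err;
		ssize_t ret;

		fd = open("/sys/power/sleep", O_WRONLY);
		if (fd < 0)
			return -1;
		ret = write(fd, "mem", 3);
		err = errno;	/* save before close() can clobber it */
		close(fd);
		if (ret < 0)
			return err == EBUSY ? 0 : -1;
		return 1;
	}

The open question in the rest of this exchange is what the caller does
after the 0 return without busy-polling.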
> >> > >
> >> > > polling is evil -- it keeps the CPU awake => wastes power.
> >> > >
> >> > > Wakelocks done right are a single atomic_t... and if you set it to 0,
> >> > > you just unblock "sleeper" thread or something. Zero polling and very
> >> > > simple...
> >> >
> >> > Except that you have to check all of the wakelocks periodically in a loop =>
> >> > polling. So?
> >>
> >> No. I want to have single atomic_t for all the wakelocks... at least
> >> in non-debug version. Debug version will be slower. I believe you
> >> originally suggested that.
> >
> > I did, but please don't call it "wakelocks". It's confusing.
>
> What you are talking about here is mostly an optimization of the
> wakelock API. You have removed timeout support and made each wakelock
> reference counted.
I also removed the arbitrary number of wakelocks (I really _hate_ the name,
can we please stop using it from now on?).
> If you ignore wakelocks with timeouts, the current
> wakelock interface can be implemented with a global atomic_t to
> prevent suspend, and a per wakelock atomic_t to prevent a single
> client from changing the global reference count by more than one.
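As a sketch (names made up here, not the actual wakelock code), that
scheme is roughly:

	#include <asm/atomic.h>
	#include <linux/wait.h>

	static atomic_t suspend_blockers = ATOMIC_INIT(0);
	static DECLARE_WAIT_QUEUE_HEAD(suspend_wait);

	struct blocker {
		atomic_t active;	/* 0 or 1 for this client */
	};

	static void blocker_acquire(struct blocker *b)
	{
		/* Only the 0 -> 1 transition bumps the global count. */
		if (atomic_xchg(&b->active, 1) == 0)
			atomic_inc(&suspend_blockers);
	}

	static void blocker_release(struct blocker *b)
	{
		if (atomic_xchg(&b->active, 0) == 1 &&
		    atomic_dec_and_test(&suspend_blockers))
			wake_up(&suspend_wait);	/* kick the sleeper thread */
	}

	/*
	 * Sleeper side:
	 *	wait_event(suspend_wait,
	 *		   atomic_read(&suspend_blockers) == 0);
	 *	...initiate suspend...
	 */

The per-client atomic_t is what keeps an unbalanced caller from moving the
global count by more than one.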
>
> There are a couple of reasons that I have not done this. It removes
> the ability to easily inspect the system when it is not suspending.
Care to elaborate?
> I do provide an option to turn off the wakelock stats, which makes
> wake_lock/unlock significantly faster, but we never run with wakelock
> stats off. Also, it pushes timeout handling to the drivers. I know many
> of you don't like timeout support, but ignoring the problem is not a
> solution. If each driver that needs timeouts uses its own timer, then
> you will often wake up from idle just to unlock a wakelock that will
> not trigger suspend. This wakeup is a thousand times as costly on the
> MSM platform as a wake_lock/unlock pair (with wakelock stats enabled).
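To make the trade-off concrete, one way to centralize timeouts (again with
made-up names, not what either implementation actually does) is to record
expiries and evaluate them lazily in the suspend decision path, arming at
most one shared timer instead of one timer per driver:

	#include <linux/jiffies.h>
	#include <linux/types.h>

	struct timed_blocker {
		unsigned long expires;	/* in jiffies; 0 == no timeout */
	};

	/*
	 * Called from the suspend decision path.  Returns true if @b
	 * still blocks suspend.  For a pending timeout, *next is lowered
	 * so the caller can arm one shared timer at the earliest expiry.
	 */
	static bool blocker_active(struct timed_blocker *b,
				   unsigned long *next)
	{
		if (!b->expires)
			return true;		/* plain lock, no timeout */
		if (time_after_eq(jiffies, b->expires))
			return false;		/* dropped lazily, no wakeup */
		if (time_before(b->expires, *next))
			*next = b->expires;
		return true;
	}

The caller would arm that timer only when expiring locks are all that
still block suspend, so a timeout costs a wakeup only when it can
actually change the outcome.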
Well, several people have already told you that the timeouts are hardly
acceptable, and they told you why. Please stop repeating the same arguments
you have already given several times. They're not convincing.
Instead of trying to convince everyone to accept your solution that you're
not willing to change, please try to listen and think about how to do things
differently so that everyone is comfortable with them. I'm sure you're more
than capable of doing that.
I do realize that you consider your current solution the best thing since
sliced bread, but please accept the fact that other people think
differently.
> I just checked my phone, and over a 24 hour awake time (370 hours
> uptime) period, it acquired about 5 million wakelocks (mostly for
> input events). If these were cache hits, and took as long as my
> benchmark did, that accounts for 20 seconds of overhead (0.023% of
> awake time, 0.1% of the 5.5 h of non-idle time).
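For reference, those numbers are internally consistent; 20 s over 5 million
pairs implies roughly 4 us per lock/unlock pair:

	5,000,000 pairs * ~4 us          ~= 20 s of overhead
	20 s / 86,400 s awake (24 h)     ~= 0.023%
	20 s / 19,800 s non-idle (5.5 h) ~= 0.1%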
Which proves what exactly?
Rafael