Message-Id: <200912201352.07689.rjw@sisk.pl>
Date: Sun, 20 Dec 2009 13:52:07 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Dmitry Torokhov <dmitry.torokhov@...il.com>,
Zhang Rui <rui.zhang@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>
Subject: Re: Async suspend-resume patch w/ completions (was: Re: Async suspend-resume patch w/ rwsems)
On Sunday 20 December 2009, Alan Stern wrote:
> On Sun, 20 Dec 2009, Rafael J. Wysocki wrote:
>
> > So, seriously, do you think it makes sense to do asynchronous suspend at all?
> > I'm asking, because we're likely to get into troubles like this during suspend
> > for other kinds of devices too and without resolving them we won't get any
> > significant speedup from asynchronous suspend.
> >
> > That said, to me it's definitely worth doing asynchronous resume with the
> > "start asynch threads upfront" modification, as the results of the tests show
> > that quite clearly. I hope you agree.
>
> It's too early to come to this sort of conclusion (i.e., that suspend
> and resume react very differently to an asynchronous approach). Unless
> you have some definite _reason_ for thinking that resume will benefit
> more than suspend, you shouldn't try to generalize so much from tests
> on only two systems.
In fact, I have one reason. Namely, the things drivers do on suspend and
resume are evidently quite different, and on the two systems I was able to
test, they apparently took different amounts of time to complete.
The very fact that on both systems resume takes substantially longer than
suspend, even when all devices are suspended and resumed synchronously, is
quite interesting.
Rafael