Message-Id: <200908301515.11525.rjw@sisk.pl>
Date: Sun, 30 Aug 2009 15:15:11 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Alan Stern <stern@...land.harvard.edu>
Cc: "linux-pm" <linux-pm@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, Len Brown <lenb@...nel.org>,
Pavel Machek <pavel@....cz>,
ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
Arjan van de Ven <arjan@...radead.org>,
Zhang Rui <rui.zhang@...el.com>,
Dmitry Torokhov <dmitry.torokhov@...il.com>,
Linux PCI <linux-pci@...r.kernel.org>
Subject: Re: [PATCH 2/6] PM: Asynchronous resume of devices
On Sunday 30 August 2009, Alan Stern wrote:
> On Sat, 29 Aug 2009, Rafael J. Wysocki wrote:
>
> > I only wanted to say that the advantage is not really that "big". :-)
> >
> > > I must agree, 14 threads isn't a lot. But at the moment that number is
> > > random, not under your control.
> >
> > It's not directly controlled, but there are some interactions between the
> > async threads, the main threads and the async framework that don't allow this
> > number to grow too much.
> >
> > IMO it sometimes is better to allow things to work themselves out, as long as
> > they don't explode, than to try to keep everything under strict control. YMMV.
>
> For testing purposes it would be nice to have a one-line summary for
> each device containing a thread ID, start timestamp, end timestamp, and
> elapsed time. With that information you could evaluate the amount of
> parallelism and determine where the bottlenecks are. It would give a
> much more detailed picture of the entire process than the total time of
> your recent patch 9.
Of course it would. I think I'll implement it.
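As a rough sketch only (the helper name and the exact message format below are
made up for illustration, not taken from any existing patch), the per-device
summary could be produced by timing each callback with ktime_get() and printing
the thread ID, start and end timestamps and the elapsed time with dev_info():

#include <linux/device.h>
#include <linux/ktime.h>
#include <linux/sched.h>

/* Hypothetical wrapper: run one resume callback and log a one-line summary. */
static int timed_resume_one(struct device *dev,
			    int (*resume_cb)(struct device *))
{
	ktime_t start, end;
	int error;

	start = ktime_get();
	error = resume_cb(dev);
	end = ktime_get();

	/* one line per device: thread ID, start, end, elapsed time */
	dev_info(dev, "resume: pid %d start %lld us end %lld us took %lld us\n",
		 task_pid_nr(current),
		 ktime_to_us(start), ktime_to_us(end),
		 ktime_to_us(ktime_sub(end, start)));

	return error;
}

Grepping the resulting log for these lines would show which devices ran in the
same thread and where the long poles are.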
The purpose of patch 9 is basically to allow one to see how much time is
spent on the handling of devices overall and to compare that with the time
spent on the other operations during suspend-resume.
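For illustration only, that overall measurement amounts to something like the
sketch below; the helper name and the message text are hypothetical and not
taken from patch 9 itself:

#include <linux/kernel.h>
#include <linux/ktime.h>

/* Hypothetical helper: time one whole device phase and report the total. */
static void report_device_phase_time(const char *phase,
				     void (*run_phase)(void))
{
	ktime_t start = ktime_get();

	run_phase();

	/* total time spent handling all devices in this phase */
	pr_info("PM: %s of devices complete after %lld usecs\n",
		phase, ktime_to_us(ktime_sub(ktime_get(), start)));
}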
Thanks,
Rafael