Message-ID: <Pine.LNX.4.44L0.0912082125410.26994-100000@netrider.rowland.org>
Date: Tue, 8 Dec 2009 21:35:59 -0500 (EST)
From: Alan Stern <stern@...land.harvard.edu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
cc: "Rafael J. Wysocki" <rjw@...k.pl>, Zhang Rui <rui.zhang@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>
Subject: Re: Async resume patch (was: Re: [GIT PULL] PM updates for 2.6.33)
On Tue, 8 Dec 2009, Linus Torvalds wrote:
> It's just that I think the "looping over children" approach is ugly; I
> think that by doing it the other way around you can make the code
> simpler and depend only on the PM device list and a simple parent
> pointer access.
I agree that it is uglier. The only advantage is in handling
asynchronous non-tree suspend dependencies, of which we probably won't
have very many. In fact, I don't know of _any_ offhand.
Interestingly, this non-tree dependency problem does not affect resume.
> I also think you are wrong that the above somehow protects against
> non-topological dependencies. If the device you want to delay your own
> suspend for is after you in the list, the down_read() on it may succeed
> simply because it hasn't even done its down_write() yet and you got
> scheduled early.
You mean, if A comes before B in the list and A must suspend after B?
Then A's down_read() on B _can't_ occur before B's down_write() on
itself. The down_write() on B happens before the
list_for_each_entry_reverse() iteration reaches A; it even happens
before B's async task is launched.
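To make the ordering argument concrete, here is a rough sketch of the
scheme described above (a sketch only, not the actual patch; the
power.rwsem field and the helper names are assumptions made up for
illustration):

	static int wait_for_child(struct device *child, void *data)
	{
		/* Blocks until the child's up_write(), i.e. until the
		 * child has finished suspending. */
		down_read(&child->power.rwsem);
		up_read(&child->power.rwsem);
		return 0;
	}

	static void async_suspend(void *data, async_cookie_t cookie)
	{
		struct device *dev = data;

		/* A waits for each of its children (B) first. */
		device_for_each_child(dev, NULL, wait_for_child);
		dev->bus->pm->suspend(dev);	/* simplified callback */
		up_write(&dev->power.rwsem);	/* lets waiters proceed */
	}

	static void dpm_suspend_all(void)
	{
		struct device *dev;

		/*
		 * Children follow their parents in dpm_list, so the
		 * reverse walk reaches B before A.  B's down_write()
		 * happens here, synchronously, before B's async task is
		 * even launched, so A's later down_read() on B cannot
		 * sneak past it.
		 */
		list_for_each_entry_reverse(dev, &dpm_list, power.entry) {
			down_write(&dev->power.rwsem);
			async_schedule(async_suspend, dev);
		}
	}

In other words, by the time the iteration reaches A and queues its async
task, every device after A in the list already has its rwsem
write-locked, so A's down_read() can succeed only after those devices
have actually suspended.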
> But I guess you could do that by walking the list twice (first to lock
> them all, then to actually call the suspend function). That whole
> two-phase thing, except the first phase _only_ locks, and doesn't do any
> callbacks.
Not necessary; as explained above, the down_write() on each device is
already taken synchronously during the single list walk, before any
async task is launched.
Alan Stern