Message-ID: <alpine.LFD.2.00.0912100739260.3560@localhost.localdomain>
Date: Thu, 10 Dec 2009 07:45:14 -0800 (PST)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Alan Stern <stern@...land.harvard.edu>
cc: "Rafael J. Wysocki" <rjw@...k.pl>, Zhang Rui <rui.zhang@...el.com>,
LKML <linux-kernel@...r.kernel.org>,
ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>
Subject: Re: Async suspend-resume patch w/ completions (was: Re: Async
suspend-resume patch w/ rwsems)
On Thu, 10 Dec 2009, Alan Stern wrote:
>
> In device_pm_remove():
>
> mutex_lock(&dpm_list_mtx);
> if (dev == dpm_next)
> dpm_next = to_device(dpm_iterate_forward ?
> dev->power.entry.next : dev->power.entry.prev);
> list_del_init(&dev->power.entry);
> mutex_unlock(&dpm_list_mtx);
I'm really not seeing the point - it's much better to hardcode the
ordering in the place you use it (where it is static and the compiler can
generate better code) than to do some dynamic choice that depends on some
fake flag - especially a global one.
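
For what it's worth, a minimal sketch (not the actual
drivers/base/power/main.c code; device_resume()/device_suspend() here are
just stand-ins for whatever per-device helpers get used, and locking and
error handling are left out) of what hardcoding the direction at the use
site could look like, instead of consulting a dpm_iterate_forward flag:

        /* Resume walks dpm_list in registration order... */
        static void dpm_resume_all(pm_message_t state)
        {
                struct device *dev;

                list_for_each_entry(dev, &dpm_list, power.entry)
                        device_resume(dev, state);
        }

        /*
         * ...and suspend walks it in reverse, so the iteration
         * direction is fixed at compile time in each caller.
         */
        static void dpm_suspend_all(pm_message_t state)
        {
                struct device *dev;

                list_for_each_entry_reverse(dev, &dpm_list, power.entry)
                        device_suspend(dev, state);
        }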
Also, quite frankly, error handling needs to be separated out of the whole
async patch, and needs to be thought about a lot more. And I would
seriously argue that if you have any async suspends, then those async
suspends are _not_ allowed to fail. At least not initially.
Having async failures and trying to fix them up is just a disaster. Which
ones actually failed, and which ones were aborted before they even really
got to their suspend routines? Which ones do you try to resume?
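
To make that concrete: a minimal, purely illustrative sketch of the "async
suspends may not fail" policy, using the existing
async_schedule()/async_synchronize_full() helpers from kernel/async.c
(__device_suspend() is a made-up name for the per-device suspend routine),
where a failure just warns instead of trying to unwind the other in-flight
suspends:

        static void async_suspend_one(void *data, async_cookie_t cookie)
        {
                struct device *dev = data;
                int error = __device_suspend(dev);

                /* No abort, no partial resume: just complain loudly. */
                WARN(error, "%s refused to suspend asynchronously (%d)\n",
                     dev_name(dev), error);
        }

        static void dpm_suspend_async_all(void)
        {
                struct device *dev;

                list_for_each_entry_reverse(dev, &dpm_list, power.entry)
                        async_schedule(async_suspend_one, dev);

                /* Every async suspend must finish before powering down. */
                async_synchronize_full();
        }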
IOW, it needs way more thought than what has clearly happened so far. And
once more, I will refuse to merge anything that is complicated for no
actual reason (where reason is "real life, and tested to make a big
difference", not some hand-waving)
Linus