Message-ID: <Pine.LNX.4.44L0.1101312235190.11313-100000@netrider.rowland.org>
Date: Mon, 31 Jan 2011 22:40:44 -0500 (EST)
From: Alan Stern <stern@...land.harvard.edu>
To: Kevin Hilman <khilman@...com>
cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Grant Likely <grant.likely@...retlab.ca>,
Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>,
Greg KH <greg@...ah.com>, LKML <linux-kernel@...r.kernel.org>,
Magnus Damm <magnus.damm@...il.com>,
Len Brown <lenb@...nel.org>
Subject: Re: [RFC][PATCH] Power domains for platform bus type
On Mon, 31 Jan 2011, Kevin Hilman wrote:
> For the on-chip SoC devices we're managing with OMAP, we're currently
> only using one set: post ops on [runtime_]suspend and pre ops on
> [runtime_]resume.
>
> However, I could imagine (at least conceptually) using the pre ops on
> suspend to do some constraints checking and/or possibly some
> management/notification of dependent devices. Another possibility
> (although possibly racy) would be using the pre ops on suspend to
> initiate some high-latency operations.
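
To make the two-set idea concrete, the platform bus's suspend path
might look something like this (an untested sketch; the structure and
field names here are made up for illustration, not necessarily what
the RFC patch actually uses):

	struct dev_power_domain {
		struct dev_pm_ops pre_ops;	/* run before the driver's callback */
		struct dev_pm_ops post_ops;	/* run after the driver's callback */
	};

	static int platform_pm_suspend(struct device *dev)
	{
		struct dev_power_domain *pd = dev->pwr_domain; /* hypothetical field */
		int ret = 0;

		/* pre ops: constraint checks, dependency notification, etc. */
		if (pd && pd->pre_ops.suspend) {
			ret = pd->pre_ops.suspend(dev);
			if (ret)
				return ret;
		}

		/* the driver's own callback runs between the two sets */
		if (dev->driver && dev->driver->pm && dev->driver->pm->suspend) {
			ret = dev->driver->pm->suspend(dev);
			if (ret)
				return ret;
		}

		/* post ops: where OMAP would actually gate the domain */
		if (pd && pd->post_ops.suspend)
			ret = pd->post_ops.suspend(dev);

		return ret;
	}

The resume path would mirror this, with the pre ops running before the
driver's resume callback and the post ops after it.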
Dependency management is very relevant here, since we're talking about
relations that explicitly aren't of the parent-child type. If any of
the devices in question get marked for async suspend/resume, for
example, they certainly will need dependency handling.
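
For instance (purely illustrative; "supplier_dev" below stands for
whatever device the domain code knows it depends on, and is not part
of any real interface), a pre op on resume could use the existing
device_pm_wait_for_dev() helper from the async PM code:

	static struct device *supplier_dev;	/* made up for illustration */

	static int my_domain_resume_pre(struct device *dev)
	{
		/*
		 * If dev was marked with device_enable_async_suspend(),
		 * its resume may run concurrently with supplier_dev's.
		 * Waiting here enforces the non-parent-child ordering
		 * explicitly.
		 */
		device_pm_wait_for_dev(dev, supplier_dev);
		return 0;
	}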
> I guess the main problem with two sets is wasted space. E.g., if I move
> OMAP to this (already hacking on it) there will be only 2 functions used
> in the post ops, [runtime_]suspend(), and 2 used in the pre ops,
> [runtime_]resume().
The wasted space is minimal; we're only talking about one extra pm_ops
structure for each power domain. Presumably any reasonable SoC isn't
going to have a tremendous number of separate power domains. Or am I
wrong about this?
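
For the sake of argument (taking the current struct dev_pm_ops, which
has on the order of 17 callbacks):

	17 function pointers * 8 bytes = ~136 bytes per extra ops set on 64-bit

so even a dozen distinct domains, each carrying a second set, come to
well under 2 KB.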
Alan Stern