Message-Id: <200902272158.31784.rjw@sisk.pl>
Date: Fri, 27 Feb 2009 21:58:29 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Pavel Machek <pavel@....cz>, Oliver Neukum <oliver@...kum.org>,
"Arve Hj?nnev?g" <arve@...roid.com>,
"Woodruff, Richard" <r-woodruff2@...com>,
Arjan van de Ven <arjan@...radead.org>,
Kyle Moffett <kyle@...fetthome.net>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
pm list <linux-pm@...ts.linux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Nigel Cunningham <nigel@...el.suspend2.net>,
Matthew Garrett <mjg59@...f.ucam.org>,
mark gross <mgross@...ux.intel.com>,
Uli Luckas <u.luckas@...d.de>,
Igor Stoppa <igor.stoppa@...ia.com>,
Brian Swetland <swetland@...gle.com>,
Len Brown <lenb@...nel.org>
Subject: Re: [RFD] Automatic suspend
On Friday 27 February 2009, Alan Stern wrote:
> On Fri, 27 Feb 2009, Pavel Machek wrote:
>
> >
> > > To summarize, we can:
> > > * Use a refcount such that automatic suspend will only be possible if it's
> > > equal to zero (but that need not be the only criterion).
> > > * Use a per-device flag in dev_pm_info that will be set whenever the device
> > > driver increases the refcount and unset whenever the driver decreases the
> > > refcount.
> > > * Use a per-process flag that will be set whenever the process increases the
> > > refcount and unset whenever the process decreases the refcount.
> >
> > Yes, that sounds sane, and that's what a reasonable wakelock
> > implementation should look like.
>
> One small point: If you add a per-device flag and a per-process flag as
> described above, then drivers and processes must not acquire nested
> references.
>
> Obviously this is fixable, but it's worth mentioning...
Yes, it's important to remember IMO.
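
To make the pitfall concrete, here is a minimal sketch of the per-device
variant (all names below, such as dev_block_suspend() and the
blocks_suspend field, are made up for illustration, not existing API):

/*
 * Sketch only: assumes <linux/atomic.h> and a new bool field,
 * blocks_suspend, added to struct dev_pm_info.  Real code would also
 * need locking to make the test-and-set below atomic.
 */
static atomic_t suspend_blockers = ATOMIC_INIT(0);

/* Automatic suspend is allowed only while no references are held
 * (though that need not be the only criterion). */
static bool autosuspend_allowed(void)
{
        return atomic_read(&suspend_blockers) == 0;
}

void dev_block_suspend(struct dev_pm_info *info)
{
        /* A single bool cannot count, so a nested call from the
         * same driver is silently dropped here ... */
        if (!info->blocks_suspend) {
                info->blocks_suspend = true;
                atomic_inc(&suspend_blockers);
        }
}

void dev_unblock_suspend(struct dev_pm_info *info)
{
        /* ... or, without the check in dev_block_suspend(), the
         * first release would clear the flag and drop the count
         * while an outer reference was still logically held. */
        if (info->blocks_suspend) {
                info->blocks_suspend = false;
                atomic_dec(&suspend_blockers);
        }
}

The per-process flag has exactly the same limitation: a plain bool can't
count, so a nested acquire is either lost on the way in or released too
early on the way out.  Either nesting has to be forbidden, as Alan says,
or the flag has to become a per-owner counter.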
Thanks,
Rafael