Message-ID: <Pine.LNX.4.44L0.0906111010360.2939-100000@iolanthe.rowland.org>
Date: Thu, 11 Jun 2009 10:16:54 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: Oliver Neukum <oliver@...kum.org>
cc: "Rafael J. Wysocki" <rjw@...k.pl>,
<linux-pm@...ts.linux-foundation.org>,
ACPI Devel Mailing List <linux-acpi@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [patch update] Re: [linux-pm] Run-time PM idea (was: Re:
[RFC][PATCH 0/2] PM: Rearrange core suspend code)
On Thu, 11 Jun 2009, Oliver Neukum wrote:
> On Thursday, 11 June 2009 15:48:33, Rafael J. Wysocki wrote:
> > > > But after pm_request_resume() returns there's no means to make sure
> > > > nothing alters it back to RPM_SUSPENDED. The workqueue doesn't help
> > > > you because you've scheduled nothing by that time. The suspension will
> > > > work because C is still in RPM_SUSPENDED.
> > >
> > > This is an example where usage counters come in handy.
> >
> > Do you mean we can count suspend/resume requests for a device?
>
> No, we count reasons a device cannot be suspended. Drivers are allowed to
> add their own reasons, and the core uses the same mechanism to indicate
> that an ongoing resume lower down in the hierarchy is also such a reason.
> The count going to zero is equivalent to a request to suspend.
Right.
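
For illustration only, here's a rough sketch of what such a counter might
look like. The names (struct dev_runtime_pm, pm_block_suspend(),
pm_allow_suspend()) are placeholders, not a proposed interface, and
pm_request_suspend() stands in for whatever asynchronous suspend request
the core ends up providing:

#include <linux/device.h>
#include <asm/atomic.h>

struct dev_runtime_pm {
	atomic_t usage_count;	/* number of reasons not to suspend */
};

static void pm_block_suspend(struct device *dev, struct dev_runtime_pm *rpm)
{
	/* One more reason the device has to stay resumed. */
	atomic_inc(&rpm->usage_count);
}

static void pm_allow_suspend(struct device *dev, struct dev_runtime_pm *rpm)
{
	/* Dropping the last reason doubles as a request to suspend. */
	if (atomic_dec_and_test(&rpm->usage_count))
		pm_request_suspend(dev);	/* placeholder for the async request */
}
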
Here's a related thought. Change the resume routines as follows:
void pm_runtime_resume(struct device *dev)
{
	// Do the actual resume ...
}
EXPORT_SYMBOL_GPL(pm_runtime_resume);

static void pm_runtime_resume_work(struct work_struct *work)
{
	/* resume_work_to_device() maps the work item back to its device,
	 * presumably via container_of() on the embedded work_struct. */
	pm_runtime_resume(resume_work_to_device(work));
}
Then there's no need for a separate pm_resume_sync(); drivers can
simply call pm_runtime_resume() directly. The same trick works for
suspending.
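
For completeness, the suspend side would mirror the block above; the names
pm_runtime_suspend(), suspend_work_to_device(), pm_wq and the work item
embedded in struct device are placeholders here, not settled names:

void pm_runtime_suspend(struct device *dev)
{
	// Do the actual suspend ...
}
EXPORT_SYMBOL_GPL(pm_runtime_suspend);

static void pm_runtime_suspend_work(struct work_struct *work)
{
	pm_runtime_suspend(suspend_work_to_device(work));
}

/* Only the asynchronous request goes through the workqueue: */
void pm_request_suspend(struct device *dev)
{
	queue_work(pm_wq, &dev->power.suspend_work);
}
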
Of course, this means you have to give up the notion that all suspends
and resumes are funnelled through the workqueue. IMO that notion isn't
worth keeping in any case.
Alan Stern