[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.LNX.4.44L0.0906261707260.4155-100000@iolanthe.rowland.org>
Date: Fri, 26 Jun 2009 17:13:29 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: "Rafael J. Wysocki" <rjw@...k.pl>
cc: Greg KH <gregkh@...e.de>, LKML <linux-kernel@...r.kernel.org>,
ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [patch update] PM: Introduce core framework for run-time PM of
I/O devices (rev. 6)
On Fri, 26 Jun 2009, Rafael J. Wysocki wrote:
> > It occurs to me that the problem would be solved if were a cancel_work
> > routine. In the same vein, it ought to be possible for
> > cancel_delayed_work to run in interrupt context. I'll see what can be
> > done.
>
> Having looked at the workqueue code I'm not sure if there's a way to implement
> that in a non-racy way. Which may be the reason why there are no such
> functions already. :-)
Well, I'll give it a try.
Speaking of races, have you noticed that the way power.work_done gets
used is racy? You can't wait for the completion before releasing the
lock, but then anything could happen.
A safer approach would be to use a wait_queue.
> In the meantime I reworked the patch (below) to use more RPM_* flags and I
> removed the runtime_break and runtime_notify bits from it. Also added some
> comments to explain some non-obvious steps (hope that helps).
>
> I also added the pm_runtime_put_atomic() and pm_runtime_put() as per the
> comment above.
>
> It seems to be a bit cleaner this way, but that's my personal view. :-)
I'll look at it over the weekend. And I'll try to see if proper
cancel_work and cancel_delayed_work functions can help clean it up.
Alan Stern
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists