Message-ID: <Pine.LNX.4.44L0.1006251056430.1604-100000@iolanthe.rowland.org>
Date: Fri, 25 Jun 2010 11:09:48 -0400 (EDT)
From: Alan Stern <stern@...land.harvard.edu>
To: "Rafael J. Wysocki" <rjw@...k.pl>
cc: Florian Mickler <florian@...kler.org>,
Linux-pm mailing list <linux-pm@...ts.linux-foundation.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Dmitry Torokhov <dmitry.torokhov@...il.com>,
Arve Hjønnevåg <arve@...roid.com>,
Neil Brown <neilb@...e.de>, mark gross <640e9920@...il.com>
Subject: Re: [update 2] Re: [RFC][PATCH] PM: Avoid losing wakeup events during
suspend
On Fri, 25 Jun 2010, Rafael J. Wysocki wrote:
> > That's not the point. If a wakeup handler queues a work item (for
> > example, by calling pm_request_resume) then it wouldn't need to guess a
> > timeout. The work item would be guaranteed to run before the system
> > could suspend again.
>
> You seem to be referring to the PM workqueue specifically. Perhaps it would be
> better to special-case it and stop it by adding a barrier work during suspend
> instead of just freezing? Then, it wouldn't need to be singlethreaded any more.
The barrier work would have to be queued to each CPU's thread. That
would be okay.
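
For illustration only, here is a rough sketch of what such a per-CPU
barrier could look like.  This is not code from this thread; pm_wq, the
helper name, and the assumption that CPU hotplug is excluded on the
suspend path are all mine:

	/*
	 * Sketch of the "barrier work" idea: instead of freezing the PM
	 * workqueue, queue a dummy work item on every CPU's worker thread
	 * and wait for all of them, which guarantees that every work item
	 * queued before the barrier has finished running.
	 */
	#include <linux/kernel.h>
	#include <linux/workqueue.h>
	#include <linux/completion.h>
	#include <linux/cpumask.h>
	#include <linux/slab.h>

	struct pm_barrier {
		struct work_struct work;
		struct completion done;
		bool queued;
	};

	static void pm_barrier_fn(struct work_struct *work)
	{
		struct pm_barrier *b = container_of(work, struct pm_barrier, work);

		complete(&b->done);
	}

	/* Drain pm_wq: return once all work queued before the call has run. */
	static int pm_wq_barrier(struct workqueue_struct *pm_wq)
	{
		struct pm_barrier *barriers;
		int cpu;

		barriers = kcalloc(nr_cpu_ids, sizeof(*barriers), GFP_KERNEL);
		if (!barriers)
			return -ENOMEM;

		/* CPU hotplug is assumed to be excluded on the suspend path. */
		for_each_online_cpu(cpu) {
			struct pm_barrier *b = &barriers[cpu];

			INIT_WORK(&b->work, pm_barrier_fn);
			init_completion(&b->done);
			b->queued = queue_work_on(cpu, pm_wq, &b->work);
		}

		for_each_possible_cpu(cpu)
			if (barriers[cpu].queued)
				wait_for_completion(&barriers[cpu].done);

		kfree(barriers);
		return 0;
	}

Note that flush_workqueue() already implements essentially this kind of
per-CPU barrier internally, so in practice "stop the workqueue with a
barrier" would probably reduce to flushing pm_wq at the right point in
the suspend sequence.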
Hmm, it looks like wait_event_freezable() and
wait_event_freezable_timeout() could use similar changes: If the
condition is true then they shouldn't try to freeze the caller.
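
As a rough illustration of that change (not a patch, and the details of
the current macro in include/linux/freezer.h may differ), the idea is
for the macro to bail out as soon as the condition holds, before any
try_to_freeze() call:

	#include <linux/wait.h>
	#include <linux/freezer.h>

	#define wait_event_freezable(wq, condition)				\
	({									\
		int __retval;							\
		for (;;) {							\
			__retval = wait_event_interruptible(wq,			\
					(condition) || freezing(current));	\
			/* suggested change: never freeze once the event is here */ \
			if (condition) {					\
				__retval = 0;					\
				break;						\
			}							\
			/* woken by a real signal rather than the freezer */	\
			if (__retval && !freezing(current))			\
				break;						\
			try_to_freeze();					\
		}								\
		__retval;							\
	})

wait_event_freezable_timeout() would need the same treatment, with the
remaining timeout propagated as the return value.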
> Still, I think the timeout is necessary anyway in case the driver simply
> doesn't handle the event and user space needs time to catch up. Unfortunately,
> the PCI wakeup code doesn't know what happens next in advance.
That could all be handled by the lower driver. Still, a 100-ms timeout
isn't going to make a significant difference, since a suspend/resume
cycle will take a comparable length of time.
Alan Stern