Message-Id: <201202232226.43104.rjw@sisk.pl>
Date: Thu, 23 Feb 2012 22:26:42 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
Cc: John Stultz <john.stultz@...aro.org>,
Linux PM list <linux-pm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Magnus Damm <magnus.damm@...il.com>, markgross@...gnar.org,
Matthew Garrett <mjg@...hat.com>,
Greg KH <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Brian Swetland <swetland@...gle.com>,
Neil Brown <neilb@...e.de>,
Alan Stern <stern@...land.harvard.edu>,
Dmitry Torokhov <dmitry.torokhov@...il.com>
Subject: Re: [RFC][PATCH 0/7] PM: Implement autosleep and "wake locks", take2
On Thursday, February 23, 2012, Srivatsa S. Bhat wrote:
> On 02/23/2012 03:40 AM, Rafael J. Wysocki wrote:
>
> > On Wednesday, February 22, 2012, Srivatsa S. Bhat wrote:
> >> On 02/22/2012 10:19 AM, John Stultz wrote:
> >>
> >>> On Wed, 2012-02-22 at 00:31 +0100, Rafael J. Wysocki wrote:
> >>>> Hi all,
> >>>>
> >>>> After the feedback so far I've decided to follow up with a refreshed patchset.
> >>>> The first two patches from the previous one went to linux-pm/linux-next
> >>>> and I included the recent evdev patch from Arve (with some modifications)
> >>>> in this patchset for completeness.
> >>>
> >>> Hey Rafael,
> >>> Thanks again for posting this! I've started playing around with it in a
> >>> kvm environment, and got the following warning after echoing off >
> >>> autosleep:
> >>> ...
> >>> PM: resume of devices complete after 185.615 msecs
> >>> PM: Finishing wakeup.
> >>> Restarting tasks ... done.
> >>> PM: Syncing filesystems ... done.
> >>> PM: Preparing system for mem sleep
> >>> Freezing user space processes ...
> >>> Freezing of tasks failed after 20.01 seconds (1 tasks refusing to freeze, wq_busy=0):
> >>> bash D ffff880015714010
> >>
> >>
> >> Ah.. I think I know what the problem is here..
> >>
> >> The kernel was freezing userspace processes and, meanwhile, you wrote "off"
> >> to autosleep. As a result, this userspace process (bash) had just
> >> entered kernel mode. Unfortunately, the autosleep_lock is held for too long,
> >> that is, something like:
> >>
> >> acquire autosleep_lock
> >> modify autosleep_state
> >> <============== "A"
> >> pm_suspend or hibernate()
> >>
> >> release autosleep_lock
> >>
> >> At the point marked "A", we should have released the autosleep lock and only
> >> then entered pm_suspend() or hibernate(). Since the current code holds the lock
> >> and enters suspend/hibernate, the userspace process that wrote "off" to autosleep
> >> (or even a userspace process that writes to /sys/power/state) will end up waiting
> >> on autosleep_lock, thus failing the freezing operation.
> >>
> >> So the solution is to always release the autosleep lock before entering
> >> suspend/hibernation.
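
To put the above in more concrete terms, the pattern being described is
something along these lines in try_to_suspend() (simplified sketch, not the
exact code from the patchset):

#include <linux/mutex.h>
#include <linux/suspend.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(autosleep_lock);
static suspend_state_t autosleep_state;

static void try_to_suspend(struct work_struct *work)
{
	mutex_lock(&autosleep_lock);

	if (autosleep_state == PM_SUSPEND_ON)
		goto out;

	/*
	 * The lock is still held across the whole transition, so a task
	 * writing to /sys/power/autosleep (or /sys/power/state) blocks on
	 * autosleep_lock inside the kernel and cannot be frozen.
	 */
	if (autosleep_state >= PM_SUSPEND_MAX)
		hibernate();
	else
		pm_suspend(autosleep_state);

 out:
	mutex_unlock(&autosleep_lock);
}
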
> >
> > Well, the autosleep lock is intentionally held around suspend/hibernation in
> > try_to_suspend(), because otherwise it would be possible to trigger automatic
> > suspend right after user space has disabled it.
> >
>
>
> Hmm.. I was just wondering if we could avoid holding yet another lock in the
> suspend/hibernate path..
>
>
> > I think the solution is to make pm_autosleep_lock() do a _trylock() and
> > return an error code if already locked.
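
Something along these lines (just to illustrate the idea, using the same
autosleep_lock as above; the exact error code is secondary):

int pm_autosleep_lock(void)
{
	/* Don't wait for try_to_suspend(); fail fast if it holds the lock. */
	return mutex_trylock(&autosleep_lock) ? 0 : -EBUSY;
}

The callers in the sysfs write paths could then simply pass that error back
to user space.
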
> >
>
> ... and also do a trylock() in pm_autosleep_set_state(), right? That is
> where John hit the problem..
>
> By the way, I am just curious.. how difficult will this make it for userspace
> to disable autosleep? I mean, would a trylock mean that the user has to keep
> fighting until he finally gets a chance to disable autosleep?
That's a good point, so I think it may be a good idea to do
mutex_lock_interruptible() in pm_autosleep_set_state() instead.
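
That is, roughly (a sketch only, with the queuing/cancelling of the suspend
work left out):

int pm_autosleep_set_state(suspend_state_t state)
{
	int error = mutex_lock_interruptible(&autosleep_lock);

	if (error)
		return error;

	autosleep_state = state;

	/*
	 * Start or stop the suspend worker depending on the new state
	 * (omitted in this sketch).
	 */

	mutex_unlock(&autosleep_lock);
	return 0;
}

This way a writer racing with a suspend in progress sleeps on the mutex, but
can still be interrupted by a signal instead of having to retry in a loop.
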
Thanks,
Rafael