Message-Id: <201205151942.36335.rjw@sisk.pl>
Date: Tue, 15 May 2012 19:42:36 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
Cc: Bojan Smojver <bojan@...ursive.com>,
Alan Stern <stern@...land.harvard.edu>,
Linux PM list <linux-pm@...r.kernel.org>,
Kernel development list <linux-kernel@...r.kernel.org>,
bp@...en8.de
Subject: Re: [PATCH]: In kernel hibernation, suspend to both

On Tuesday, May 15, 2012, Srivatsa S. Bhat wrote:
> On 05/15/2012 05:29 AM, Bojan Smojver wrote:
>
> > On Mon, 2012-05-14 at 21:47 +1000, Bojan Smojver wrote:
> >> No. That hangs my box.
> >>
> >> This triggers a bug in workqueues code (essentially the same as the
> >> previous patch, except for sys_sync() not being done):
> >
> > Alan/Srivatsa,
> >
> > Coming back to the explanation of how this whole thing works, it would
> > seem that at the point of image writing all devices are fully functional
> > (not just some, as I mistakenly believed). However, the processes are
> > supposed to be already frozen, right? Calling suspend_prepare(), which
> > will essentially try to freeze the processes and kernel threads, seems
> > like the wrong thing to do.
> >
> > Did you guys mean that we should be calling
> > pm_notifier_call_chain(PM_SUSPEND_PREPARE) only here?
> >
>
>
> Exactly! And also arrange for the corresponding PM_POST_SUSPEND notification
> to happen at the end of the suspend-to-RAM stage...
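
If I understand the idea correctly, that would amount to something like
this at the point where the image has already been written (just a sketch,
with a made-up helper name and error handling trimmed):

/*
 * The approach suggested above: bracket only the suspend-to-RAM phase
 * with PM_SUSPEND_PREPARE/PM_POST_SUSPEND notifiers and skip
 * suspend_prepare(), since tasks have already been frozen by hibernation.
 */
static int suspend_after_image_write(void)
{
	int error;

	error = pm_notifier_call_chain(PM_SUSPEND_PREPARE);
	if (error)
		goto out;

	/* Devices are fully functional here, so just enter S3. */
	error = suspend_devices_and_enter(PM_SUSPEND_MEM);

 out:
	pm_notifier_call_chain(PM_POST_SUSPEND);
	return error;
}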

Actually, no. The notifiers are supposed to be called when user space is
available, otherwise some things will break badly (firmware loading being
one example, IIRC).

So, I think we should pretend that this is all hibernation and, even if
the suspend-to-RAM phase succeeds, run the PM_POST_HIBERNATION notifiers
(which I suppose is what Bojan's first patch did, right?).
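
Roughly, what I have in mind is something like this at the point where
power_down() would normally switch the machine off (again just a sketch,
with a made-up helper name and the error paths trimmed):

/*
 * Sketch of the "pretend it is all hibernation" idea: after the image has
 * been written, enter suspend-to-RAM directly.  Tasks are already frozen
 * and no PM_SUSPEND_* notifiers are issued; on wakeup we simply return to
 * hibernate(), which thaws everything and runs the PM_POST_HIBERNATION
 * notifiers just like it does for a hibernation that did not power off.
 */
static int hibernate_then_suspend(void)
{
	int error;

	/* Devices are fully functional here, so just enter S3. */
	error = suspend_devices_and_enter(PM_SUSPEND_MEM);
	if (error)
		return error;	/* the caller can fall back to powering off */

	/* Woken up from RAM instead of resuming from the image. */
	return 0;
}

That way, from the point of view of notifier users nothing unusual has
happened: they saw PM_HIBERNATION_PREPARE while user space was still
running and they will see PM_POST_HIBERNATION after everything has been
thawed.
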
Thanks,
Rafael
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/