Message-Id: <200802201344.11643.jesse.barnes@intel.com>
Date: Wed, 20 Feb 2008 13:44:11 -0800
From: Jesse Barnes <jesse.barnes@...el.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
Jeff Chua <jeff.chua.linux@...il.com>,
lkml <linux-kernel@...r.kernel.org>,
Dave Airlie <airlied@...ux.ie>, linux-acpi@...r.kernel.org,
suspend-devel List <suspend-devel@...ts.sourceforge.net>,
Greg KH <gregkh@...e.de>
Subject: Re: 2.6.25-rc2 System no longer powers off after suspend-to-disk. Screen becomes green.
On Wednesday, February 20, 2008 1:13 pm Linus Torvalds wrote:
> On Wed, 20 Feb 2008, Jesse Barnes wrote:
> > The current callback system looks like this (according to Rafael and the
> > last time I looked):
> > ->suspend(PMSG_FREEZE)
> > ->resume()
> > ->suspend(PMSG_SUSPEND)
> > *enter S3 or power off*
> > ->resume()
>
> Yes, it's very messy.
>
> It's messy for a few different reasons:
>
> - the one you hit: a driver actually has a really hard time telling what
> PMSG_SUSPEND really means.
>
> - more importantly, we generally don't want to "suspend/resume" the
> hardware at all around a power-off, because we're going to resume with
> the state at the time of the PMSG_FREEZE, which means that the hardware
> has actually *changed* and been used in between!
Exactly.
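To make that concrete: with the current interface, a 2.6.25-era PCI driver
ends up with something like the sketch below, and the PM_EVENT_SUSPEND case
is genuinely ambiguous (the mydev_* helpers are made up for illustration):

static int mydev_suspend(struct pci_dev *pdev, pm_message_t state)
{
        switch (state.event) {
        case PM_EVENT_FREEZE:
                /* hibernate snapshot: just quiesce DMA and leave
                 * the hardware state alone */
                mydev_stop_dma(pdev);
                break;
        case PM_EVENT_SUSPEND:
                /* could be STR, or the pre-poweroff step of
                 * hibernate -- no way to tell them apart here */
                mydev_stop_dma(pdev);
                mydev_save_state(pdev);
                break;
        }
        return 0;
}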
> So the "->resume" really isn't a resume at all. It's much closer to a
> "->reset".
Yeah, in the hibernate case this is definitely true.
> Of course, the "solution" to this all right now is that we have to reset
> everything even if it *is* a suspend event, so it basically means that STR
> ends up using the much weaker model that snapshot-to-disk uses.
>
> The fundamental problem being that the two really have nothing
> whatsoever to do with each other. They aren't even similar. Never were.
>
> > And in the long term we could have:
> > ->suspend()
> > *enter S3*
> > ->resume()
>
> Yes, apart from all the complexities (suspend_late/resume_early). So in
> reality it's more than that, but the suspend/resume things are clearly
> nesting, and they have the potential to actually keep state around
> (because we *know* this machine is not going to mess with the devices in
> between).
Really, even in the simple S3 case we still need the early/late stuff?
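Even so, the payoff for drivers would be a resume path that can trust its
saved state instead of re-probing, roughly like this (hypothetical mydev
helpers again):

static int mydev_resume(struct pci_dev *pdev)
{
        /*
         * Pure S3: nothing else touched the hardware while we were
         * down, so restoring the state we saved at suspend time is
         * enough -- no full re-initialization needed.
         */
        mydev_restore_state(pdev);
        mydev_start_dma(pdev);
        return 0;
}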
> IOW, here we actually can have as an option "assume the device is there
> when you return".
>
> > or:
> > ->hibernate()
> > *kexec to another kernel to save image*
> > *power off*
> > ->return_from_hibernate() (or somesuch)
>
> Enough people don't trust kexec that I suspect the right thing simply is
>
> ->freeze() // stop dma, synchronize device state
> *snapshot*
> ->unfreeze(); // resume dma
> *save image*
> [ optionally ->poweroff() ] // do we really care? I'd say no
> *power off*
> ->restore() // reset device to the frozen one
>
> which may have four entry-points that can be illogically mapped to the
> suspend/resume ones like we do now, but they really have nothing to do
> with suspending/resuming.
Well, it seems like we'll have to fix drivers in either case, and isn't a
kexec approach fundamentally more sound and simple, design-wise? Rafael
pointed out some problems with properly setting wakeup states, but I think
those could be overcome...
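Still, if we end up with the non-kexec scheme above, I'd expect the
driver-visible interface to become a separate set of entry points rather
than overloading suspend/resume -- strawman only, names invented:

struct hibernate_ops {
        int (*freeze)(struct device *dev);      /* stop DMA, sync device state */
        int (*unfreeze)(struct device *dev);    /* restart DMA after the snapshot */
        int (*poweroff)(struct device *dev);    /* optional, right before power-off */
        int (*restore)(struct device *dev);     /* really a reset, not a resume */
};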
> And notice how while "freeze/restore" kind of pairs like a
> "suspend/resume", it really shouldn't be expected to realistically restore
> the same state at all. The "restore" part is generally much better seen as
> a "reset hardware" than a "resume" thing. Because we literally cannot
> trust *anything* about the state since we froze it - we might have booted
> a different OS in between etc. Very different from suspend/resume.
Yeah, definitely. The restore path has to be much more robust and deal with
configuration changes, etc. (within reason).
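In other words, ->restore() ends up looking more like probe than resume --
hypothetical mydev helpers once more:

static int mydev_restore(struct pci_dev *pdev)
{
        /*
         * Trust nothing: another kernel (or another OS entirely) may
         * have run since the snapshot was taken.  Reset and re-init
         * from scratch instead of restoring saved state.
         */
        mydev_reset_hw(pdev);
        return mydev_init_hw(pdev);
}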
Jesse