Message-ID: <20100527183041.0487bdf8@lxorguk.ukuu.org.uk>
Date: Thu, 27 May 2010 18:30:41 +0100
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Matthew Garrett <mjg59@...f.ucam.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
	Arve Hjønnevåg <arve@...roid.com>, Florian Mickler <florian@...kler.org>,
Vitaly Wool <vitalywool@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Paul@...p1.linux-foundation.org, felipe.balbi@...ia.com,
Linux OMAP Mailing List <linux-omap@...r.kernel.org>,
Linux PM <linux-pm@...ts.linux-foundation.org>
Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)
> > Opportunistic suspend is just a deep idle state, nothing else.
>
> No. The useful property of opportunistic suspend is that nothing gets
> scheduled. That's fundamentally different to a deep idle state.
Nothing gets scheduled in a deep idle state either - it's idle. We leave
the idle state in order to schedule anything.
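
To make the "deep idle state" framing concrete: below is a rough sketch of
how such a state is described on the cpuidle side, assuming the later
driver-based cpuidle API (the state names, latency numbers and enter hook
are made up for illustration; none of this is taken from the patch series
under discussion). A suspend-like state is declared the same way as any
other C-state - it just has a much larger exit latency and target
residency.

#include <linux/cpuidle.h>

/* Hypothetical platform enter hook: cut clocks/power for the chosen
 * state, then report which state was actually entered. */
static int example_enter(struct cpuidle_device *dev,
                         struct cpuidle_driver *drv, int index)
{
        /* platform-specific low power entry would go here */
        return index;
}

static struct cpuidle_driver example_idle_driver = {
        .name        = "example_idle",
        .state_count = 2,
        .states = {
                {       /* shallow idle: cheap to enter and leave */
                        .name             = "C1",
                        .desc             = "shallow idle",
                        .exit_latency     = 2,          /* us */
                        .target_residency = 10,         /* us */
                        .enter            = example_enter,
                },
                {       /* "opportunistic suspend" as a very deep idle state */
                        .name             = "DEEP",
                        .desc             = "suspend-like deep idle",
                        .exit_latency     = 5000,       /* us */
                        .target_residency = 200000,     /* us */
                        .enter            = example_enter,
                },
        },
};

The governor then picks between the two on each idle entry, subject to
whatever latency constraints are currently in force - which is exactly
the "do not auto-enter" rule below.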
I believe the constraint is
- Do not auto-enter a state for which you cannot maintain the devices in
use "properly".
On a current PC that generally means 'not suspend'; on a lot of embedded
boards (including Android phones) it includes an opportunistic 'suspend'
and also several states halfway between the deepest PC idle states and
suspend.
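
As a concrete illustration of holding such a constraint, here is a minimal
userspace sketch using the PM QoS /dev/cpu_dma_latency interface - an
assumption for illustration, not the mechanism proposed in this series; the
20us figure is made up:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int32_t max_latency_us = 20;    /* tolerable wakeup latency */
        int fd = open("/dev/cpu_dma_latency", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/cpu_dma_latency");
                return 1;
        }
        if (write(fd, &max_latency_us, sizeof(max_latency_us)) !=
            (ssize_t)sizeof(max_latency_us)) {
                perror("write");
                close(fd);
                return 1;
        }

        /* The constraint holds for as long as the file descriptor stays
         * open; closing it (or exiting) releases it again, and the kernel
         * is free to auto-enter deeper states once more. */
        sleep(60);
        close(fd);
        return 0;
}

While the constraint is held, idle states whose exit latency exceeds 20us
(including any suspend-like state) are off limits; drop the constraint and
the deepest states become eligible again.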
> > Stop thinking about suspend as a special mechanism. It's not - except
> > for s2disk, which is an entirely different beast.
>
> On PCs, suspend has more in common with s2disk than it does C states.
Today's PCs are a special case. More to the point, I don't think anyone
is expecting opportunistic suspend to be useful on _today's_ x86 systems.
Even on today's PCs your assumption is questionable for virtual machines,
where a VM suspend is a lot faster and rather useful.
Alan