Message-ID: <1261425499.4328.10.camel@maxim-laptop>
Date: Mon, 21 Dec 2009 21:58:19 +0200
From: Maxim Levitsky <maximlevitsky@...il.com>
To: Nigel Cunningham <ncunningham@...a.org.au>
Cc: "Rafael J. Wysocki" <rjw@...k.pl>,
pm list <linux-pm@...ts.linux-foundation.org>,
ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
Dmitry Torokhov <dmitry.torokhov@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [linux-pm] [RFC] Asynchronous suspend/resume - test results
On Mon, 2009-12-21 at 18:25 +1100, Nigel Cunningham wrote:
> Hi.
>
> Rafael J. Wysocki wrote:
> > I'm not sure what the next step should be at this point. To me, the picture is
> > quite clear now, but perhaps we ought to run more tests on some other machines
> > or something. Please let me know what you think.
>
> Looks great. If you do decide you want other machines tested, I'll
> happily try the machines around here:
>
> - Dell M1530
> - Omnibook XE3-GF.
> - Via LN10000 based Mythtv Box
> - Desktop machine - Some Intel-based mobo I've forgotten the name of.
>
> I've been quietly following the thread, but have lost track of what
> iteration of the patch you're on, so if you'd give me a pointer to it,
> I'd be grateful.
>
> By the way, I haven't forgotten about sending some real patches for
> swsusp (ie more than just cleanups). I'm just busy with other things and
> also thinking carefully about what order to do things in.
I vote for the ability to save all of RAM to disk.
After several days of testing I managed to make my system do s2disk
really reliably (~300 cycles tested), but suspending when memory is
tight still fails sometimes.
I haven't yet tried your TuxOnIce system, though.
Best regards,
Maxim Levitsky
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/