Message-ID: <alpine.DEB.1.10.0905251732310.18053@asgard>
Date: Mon, 25 May 2009 17:35:51 -0700 (PDT)
From: david@...g.hm
To: Oliver Neukum <oliver@...kum.org>
cc: nigel@...onice.net, Pavel Machek <pavel@....cz>,
Bartlomiej Zolnierkiewicz <bzolnier@...il.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
linux-pm@...ts.linux-foundation.org,
tuxonice-devel@...ts.tuxonice.net, linux-kernel@...r.kernel.org
Subject: Re: [TuxOnIce-devel] [RFC] TuxOnIce

On Tue, 26 May 2009, Oliver Neukum wrote:
> On Monday, 25 May 2009 23:39:17, Nigel Cunningham wrote:
>>> If there's not enough swap available, swsusp should freeze, realize
>>> there's no swap, unfreeze and continue. I do not see a reliability
>>> problem there.
>>
>> If there's not enough storage available (I'm also thinking of the file
>> allocator Oliver wants), freeing some memory may get you in a position
>
> No, I do want a dedicated partition. Going to a filesystem is just hiding
> the problem. Filesystems can return -ENOSPC.
> I also want my system to reliably hibernate if the filesystem to hold
> the image happens to be remounted ro or to be undergoing a filesystem
> check.
>
> For full reliability you simply need a reservation. In addition, that's
> the fastest solution: a simple linear write to an unfragmented area.
> The typical system today has three orders of magnitude more disk
> than RAM. Do you really have a system you want to hibernate that has
> less than two orders of magnitude more disk than RAM?

I actually have a couple of systems with 128G of RAM and 144G of disk.
They can't take 3.5" drives (and I don't know whether their SAS
backplane can drive SATA drives; even if it can, it can't hold many of
them), so the 'drives are cheap' answer may not work.

Now, the question of whether it makes sense to try to hibernate such a
system is a very valid one.
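
For what it's worth, here is a rough userspace sketch of the kind of
check being argued about: read /proc/meminfo and see whether free swap
could hold a worst-case (all-of-RAM) image. This is only my
illustration of the disk/RAM ratio question, not how swsusp or TuxOnIce
actually size or place the image, and the "image must fit in free swap"
threshold is just an assumption for the example:

/*
 * Illustrative only: compare SwapFree against MemTotal from
 * /proc/meminfo.  A real hibernation image is usually smaller than
 * all of RAM, so this is a deliberately pessimistic check.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	unsigned long mem_total = 0, swap_free = 0;

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* sscanf leaves the value untouched on non-matching lines */
		sscanf(line, "MemTotal: %lu kB", &mem_total);
		sscanf(line, "SwapFree: %lu kB", &swap_free);
	}
	fclose(f);

	printf("MemTotal %lu kB, SwapFree %lu kB\n", mem_total, swap_free);
	if (swap_free < mem_total)
		printf("worst-case image would not fit in free swap\n");
	else
		printf("free swap covers all of RAM\n");
	return 0;
}

On the 128G/144G boxes above, that check fails unless most of the disk
is given over to swap, which is rather the point.
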
David Lang