Message-ID: <87zm36msvp.fsf@jbms.ath.cx>
Date: Mon, 11 Jun 2007 11:07:54 -0400
From: Jeremy Maitin-Shepard <jbms@....edu>
To: Pavel Machek <pavel@....cz>
Cc: Jeremy Maitin-Shepard <jmaitins@...rew.cmu.edu>,
"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...l.org>,
Nigel Cunningham <nigel@...el.suspend2.net>
Subject: Re: A kexec approach to hibernation
Pavel Machek <pavel@....cz> writes:
[snip]
>> > If _I_ were willing to add some runtime overhead to make hibernation
>> > simpler, I'd just use some virtualization to do that... with added
>> > advantage of "hibernate here, resume on different hw".
>>
>> I don't believe there is going to be any runtime overhead.
> 64MB less memory seems like runtime overhead for me. If you know how
> to do kexec without pre-reserving memory, I believe kexec/kdump team
> will be interested.
The main reason kdump needs to reserve memory at boot is that the
crashdump kernel must be preloaded into memory so that it is available
on panic, and whatever memory the crashdump kernel needs to run must
likewise be available at all times, since a panic can occur at any
moment. In addition, no attempt is made to shut down devices on panic,
so ongoing DMA may clobber existing memory; the crashdump kernel
therefore has to run entirely out of a reserved region.
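(For reference, that reservation is what the crashkernel= boot parameter
requests; the usual example from the kdump documentation, with sizes
chosen purely for illustration, is crashkernel=64M@16M, which sets aside
64MB starting at the 16MB physical offset for the crashdump kernel.)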
For hibernate via kexec, however, neither issue exists. The simplest
solution would be to back up the first, say, 16MB or 64MB of memory
(however much the "save" kernel is to be given) into free pages just
before copying the "save" kernel into the desired position and jumping
to it.
Due to the speed of memory copying, this should not add any significant
overhead.
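To make the backup step concrete, here is a minimal userspace sketch
(plain illustration, not kernel code; the region size, buffer names and
the simulated "low memory" are all invented for the example) of saving
the low region into free pages before it is overwritten and restoring
it afterwards:

/*
 * Illustrative userspace sketch of the scheme described above: before
 * the "save" kernel is copied into low memory, the contents of that
 * region are first copied into free pages elsewhere, so nothing is
 * lost, and can be copied back later.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  4096
#define LOW_PAGES  16          /* stand-in for the 16MB/64MB region */

/* "Physical memory": the low region the save kernel would occupy. */
static unsigned char low_mem[LOW_PAGES * PAGE_SIZE];

/* Free pages holding the backup (in the kernel these would be page
 * frames known to be free at hibernation time). */
static unsigned char *backup[LOW_PAGES];

static void backup_low_memory(void)
{
	for (int i = 0; i < LOW_PAGES; i++) {
		backup[i] = malloc(PAGE_SIZE);
		if (!backup[i]) {
			fprintf(stderr, "out of free pages\n");
			exit(1);
		}
		memcpy(backup[i], low_mem + i * PAGE_SIZE, PAGE_SIZE);
	}
}

static void restore_low_memory(void)
{
	for (int i = 0; i < LOW_PAGES; i++) {
		memcpy(low_mem + i * PAGE_SIZE, backup[i], PAGE_SIZE);
		free(backup[i]);
	}
}

int main(void)
{
	memset(low_mem, 0xaa, sizeof(low_mem));  /* pretend live data */

	backup_low_memory();                     /* save the low region */
	memset(low_mem, 0x55, sizeof(low_mem));  /* "copy in" save kernel */

	/* ... the save kernel would run and write the image here ... */

	restore_low_memory();                    /* contents recovered */
	printf("low memory restored: %s\n",
	       low_mem[0] == 0xaa ? "ok" : "corrupted");
	return 0;
}

In the real kexec path the backup would go into actual free page frames
and the restore would be done on resume, but the cost is essentially
just the two memcpy() loops above.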
[snip]
--
Jeremy Maitin-Shepard