Message-ID: <51F404D0.6070004@parallels.com>
Date: Sat, 27 Jul 2013 21:35:12 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Marco Stornelli <marco.stornelli@...il.com>
CC: <linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<linux-mm@...ck.org>, <criu@...nvz.org>, <devel@...nvz.org>,
<xemul@...allels.com>
Subject: Re: [PATCH RFC] pram: persistent over-kexec memory file system
On 07/27/2013 07:41 PM, Marco Stornelli wrote:
> Il 26/07/2013 14:29, Vladimir Davydov ha scritto:
>> Hi,
>>
>> We want to propose a way to upgrade a kernel on a machine without
>> restarting all the user-space services. This is to be done with CRIU
>> project, but we need help from the kernel to preserve some data in
>> memory while doing kexec.
>>
>> The key point of our implementation is leaving process memory in-place
>> during reboot. This should eliminate most io operations the services
>> would produce during initialization. To achieve this, we have
>> implemented a pseudo file system that preserves its content during
>> kexec. We propose saving CRIU dump files to this file system, kexec'ing
>> and then restoring the processes in the newly booted kernel.
>>
>
> http://pramfs.sourceforge.net/
AFAIU it's a bit of a different thing: PRAMFS, as well as pstore, which has
already been merged, requires hardware support for over-reboot
persistency, so-called non-volatile RAM, i.e. RAM which is not directly
accessible and so is not used by the kernel. On the contrary, what we'd
like to have is preservation of ordinary RAM across kexec. It is possible,
because RAM is not reset during kexec. This would allow leaving
applications' working sets as well as filesystem caches in place, speeding
up the reboot process as a whole and reducing the downtime significantly.
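For illustration, the intended workflow could look roughly like the sketch
below. The "pram" filesystem type and the /mnt/pram mount point are
assumptions based on this RFC (the interface is not finalized); the criu
and kexec invocations use standard options, but exact flags would depend
on the service being checkpointed.

```shell
# Mount the persistent-over-kexec filesystem (type "pram" per this RFC).
mount -t pram none /mnt/pram

# Checkpoint a service's process tree into the persistent fs; the dump
# files stay in RAM across the kexec instead of going to disk.
criu dump -t "$(pidof myservice)" -D /mnt/pram/myservice --shell-job

# Load the new kernel and kexec into it; RAM contents are not reset.
kexec -l /boot/vmlinuz-new --initrd=/boot/initrd-new --reuse-cmdline
kexec -e

# After booting the new kernel, mount pram again and restore the service.
mount -t pram none /mnt/pram
criu restore -D /mnt/pram/myservice --shell-job
```

Since the dump images never leave RAM, the usual I/O cost of writing and
re-reading checkpoint data on disk is avoided entirely.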
Thanks.