Message-ID: <8f9d81a8-1071-43ca-98cd-e9c1eab8e014@amazon.de>
Date: Tue, 17 Oct 2023 10:31:27 +0200
From: Alexander Graf <graf@...zon.de>
To: <madvenka@...ux.microsoft.com>, <gregkh@...uxfoundation.org>,
<pbonzini@...hat.com>, <rppt@...nel.org>, <jgowans@...zon.com>,
<arnd@...db.de>, <keescook@...omium.org>,
<stanislav.kinsburskii@...il.com>, <anthony.yznaga@...cle.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<jamorris@...ux.microsoft.com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
kvm <kvm@...r.kernel.org>
Subject: Re: [RFC PATCH v1 00/10] mm/prmem: Implement the
Persistent-Across-Kexec memory feature (prmem)
Hey Madhavan!
This patch set looks super exciting - thanks a lot for putting it
together. We've been poking at a very similar direction for a while as
well and will discuss the fundamental problem of how to persist kernel
metadata across kexec at LPC:
https://lpc.events/event/17/contributions/1485/
It would be great to have you in the room as well then.
Some more comments inline.
On 17.10.23 01:32, madvenka@...ux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>
>
> Introduction
> ============
>
> This feature can be used to persist kernel and user data across kexec reboots
> in RAM for various uses. E.g., persisting:
>
> - cached data. E.g., database caches.
> - state. E.g., KVM guest states.
> - historical information since the last cold boot. E.g., events, logs
> and journals.
> - measurements for integrity checks on the next boot.
> - driver data.
> - IOMMU mappings.
> - MMIO config information.
>
> This is useful on systems where there is no non-volatile storage or
> non-volatile storage is too small or too slow.
This is useful in more situations than that. For example, we need it to do
a kexec while a virtual machine is suspended but still has its IOMMU
mappings intact (Live Update). For that, we need to ensure that DMA can
still reach the VM memory and that everything gets reassembled identically
and without interruption on the receiving end.
> The following sections describe the implementation.
>
> I have enhanced the ram disk block device driver to provide persistent ram
> disks on which any filesystem can be created. This is for persisting user data.
> I have also implemented DAX support for the persistent ram disks.
This is probably the least interesting of the enablements, right? You
can already today reserve RAM at boot as a DAX block device and use it
for that purpose.
> I am also working on making ZRAM persistent.
>
> I have also briefly discussed the following use cases:
>
> - Persisting IOMMU mappings
> - Remembering DMA pages
> - Reserving pages that encounter memory errors
> - Remembering IMA measurements for integrity checks
> - Remembering MMIO config info
> - Implementing prmemfs (special filesystem tailored for persistence)
>
> Allocate metadata
> =================
>
> Define a metadata structure to store all persistent memory related information.
> The metadata fits into one page. On a cold boot, allocate and initialize the
> metadata page.
>
> Allocate data
> =============
>
> On a cold boot, allocate some memory for storing persistent data. Call it
> persistent memory. Specify the size in a command line parameter:
>
> prmem=size[KMG][,max_size[KMG]]
>
> size Initial amount of memory allocated to prmem during boot
> max_size Maximum amount of memory that can be allocated to prmem
>
> When the initial memory is exhausted via allocations, expand prmem dynamically
> up to max_size. Expansion is done by allocating from the buddy allocator.
> Record all allocations in the metadata.
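(Just to make the option's semantics concrete: here is a minimal
userspace sketch of how a "size[KMG][,max_size[KMG]]" string could be
parsed. parse_size() and parse_prmem() are hypothetical stand-ins for
the kernel's memparse() and the early_param handler, not the actual
patch code.)

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's memparse(): parse a number with
 * an optional K/M/G suffix, advancing *end past what was consumed. */
uint64_t parse_size(const char *s, const char **end)
{
	uint64_t val = strtoull(s, (char **)end, 10);

	switch (**end) {
	case 'G': case 'g':
		val <<= 10;	/* fall through */
	case 'M': case 'm':
		val <<= 10;	/* fall through */
	case 'K': case 'k':
		val <<= 10;
		(*end)++;
		break;
	}
	return val;
}

/* Parse "size[KMG][,max_size[KMG]]". If max_size is omitted it defaults
 * to size, i.e. no dynamic expansion. Returns 0 on success, -1 on a
 * malformed string or max_size < size. */
int parse_prmem(const char *arg, uint64_t *size, uint64_t *max_size)
{
	const char *end;

	*size = parse_size(arg, &end);
	if (end == arg)
		return -1;		/* no number at all */
	*max_size = *size;
	if (*end == ',') {
		arg = end + 1;
		*max_size = parse_size(arg, &end);
		if (end == arg || *max_size < *size)
			return -1;
	}
	return *end ? -1 : 0;		/* trailing junk is an error */
}
```

So "prmem=512M,2G" would reserve 512 MiB up front and allow growth from
the buddy allocator up to 2 GiB.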
I don't understand why we need a separate allocator. Why can't we just
use normal Linux allocations and serialize their locations for handover?
We would obviously still need to find a large contiguous piece of memory
for the target kernel to bootstrap itself into until it can read which
pages it can and cannot use, but we can do that allocation in the
source environment using CMA, no?
What I'm trying to say is: I think we're better off separating the
handover mechanism from the allocation mechanism. If we can implement
handover without a new allocator, we can use it for simple things with a
slight runtime penalty. To accelerate the handover then, we can later
add a compacting allocator that can use the handover mechanism we
already built to persist itself.
I have a WIP branch where I'm toying with such a handover mechanism that
uses device tree to serialize/deserialize state. By standardizing the
property naming, we can in the receiving kernel mark all persistent
allocations as reserved and then slowly either free them again or mark
them as in-use one by one:
https://github.com/agraf/linux/commit/fd5736a21d549a9a86c178c91acb29ed7f364f42
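(The core of the handover idea can be sketched independently of the
device tree encoding: serialize the persisted ranges in the source
kernel, then walk them in the receiving kernel and mark each one
reserved. This is a toy userspace sketch with an assumed flat
(count, {addr, size}...) layout; the WIP branch above uses device tree
properties instead, and reserve_all() just sums sizes where real code
would call memblock_reserve() per region.)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One persisted allocation: a physical range handed over across kexec.
 * Illustrative layout only, not the device tree schema from the branch. */
struct handover_region {
	uint64_t addr;
	uint64_t size;
};

/* Source kernel side: flatten the region list into a handover buffer as
 * a count followed by (addr, size) pairs. Returns bytes written. */
size_t serialize_regions(const struct handover_region *r, uint64_t n,
			 uint8_t *buf)
{
	memcpy(buf, &n, sizeof(n));
	memcpy(buf + sizeof(n), r, n * sizeof(*r));
	return sizeof(n) + n * sizeof(*r);
}

/* Receiving kernel side: walk the buffer and "reserve" every region so
 * nothing tramples it before its owner reclaims it. Here we just return
 * the total reserved bytes; real code would call memblock_reserve()
 * (or equivalent) for each range. */
uint64_t reserve_all(const uint8_t *buf)
{
	const struct handover_region *r;
	uint64_t n, i, total = 0;

	memcpy(&n, buf, sizeof(n));
	r = (const struct handover_region *)(buf + sizeof(n));
	for (i = 0; i < n; i++)
		total += r[i].size;
	return total;
}
```

The point of the standardized naming is exactly this split: the
receiving kernel can reserve everything blindly first, then hand each
range back to its owning subsystem (or free it) one by one.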
I used ftrace as an example payload to persist: with the handover
mechanism in place, we serialize/deserialize the ftrace ring buffer
metadata and are thus able to read traces from the previous system after
kexec. This way, you can for example profile the kexec exit path.
It's not even in RFC state yet; there are a few things where I would
need a couple of days to think hard about data structures, layouts and
other problems :). But I believe you get the idea from the patch.
One such user of KHO could be a new allocator like prmem, and each
subsystem's serialization code could choose to rely on the prmem
subsystem to persist data instead of doing it itself. That way you get a
very non-intrusive enablement path for kexec handover, easily amendable
data structures that can change compatibly over time, as well as the
ability to recreate ephemeral data structures based on persistent
information - which will be necessary to persist VFIO containers.
Alex
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879