Message-ID: <e0ed9fb9-8e7a-44ad-976a-27362f6e537a@amazon.de>
Date: Mon, 28 Aug 2023 22:50:19 +0200
From: Alexander Graf <graf@...zon.de>
To: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>,
"Gowans, James" <jgowans@...zon.com>
CC: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"rppt@...nel.org" <rppt@...nel.org>,
"madvenka@...ux.microsoft.com" <madvenka@...ux.microsoft.com>,
"anthony.yznaga@...cle.com" <anthony.yznaga@...cle.com>,
"steven.sistare@...cle.com" <steven.sistare@...cle.com>,
"stanislav.kinsburskii@...il.com" <stanislav.kinsburskii@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"seanjc@...gle.com" <seanjc@...gle.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"kys@...rosoft.com" <kys@...rosoft.com>,
"arnd@...db.de" <arnd@...db.de>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"anrayabh@...ux.microsoft.com" <anrayabh@...ux.microsoft.com>,
"dragan.cvetic@....com" <dragan.cvetic@....com>,
"jinankjain@...ux.microsoft.com" <jinankjain@...ux.microsoft.com>,
"derek.kiernan@....com" <derek.kiernan@....com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Andrew Morton" <akpm@...ux-foundation.org>,
<kexec@...ts.infradead.org>, <iommu@...ts.linux.dev>,
kvm <kvm@...r.kernel.org>
Subject: Re: [RFC PATCH] Introduce persistent memory pool
+kexec, iommu, kvm
On 23.08.23 04:45, Stanislav Kinsburskii wrote:
>
> +akpm, +linux-mm
>
> On Fri, Aug 25, 2023 at 01:32:40PM +0000, Gowans, James wrote:
>> On Fri, 2023-08-25 at 10:05 +0200, Greg Kroah-Hartman wrote:
>>
>> Thanks for adding me to this thread, Greg!
>>
>>> On Tue, Aug 22, 2023 at 11:34:34AM -0700, Stanislav Kinsburskii wrote:
>>>> This patch addresses the need for a memory allocator dedicated to
>>>> persistent memory within the kernel. This allocator will preserve
>>>> kernel-specific states like DMA passthrough device states, IOMMU state, and
>>>> more across kexec.
>>>> The proposed solution offers a foundational implementation for potential
>>>> custom solutions that might follow. Though the implementation is
>>>> intentionally kept concise and straightforward to foster discussion and
>>>> feedback, it's fully functional in its current state.
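A rough idea of what an interface for such a pool could look like (the
names below are illustrative guesses, not the actual contents of
include/linux/pmpool.h from the posted patch):

/*
 * Hypothetical sketch of a persistent-pool interface; none of these
 * names are taken from the posted patch.
 */
struct pmpool;

/* Attach to a reserved physical range that kexec leaves untouched. */
struct pmpool *pmpool_create(phys_addr_t base, size_t size);

/*
 * Allocations come back at stable physical addresses, so the next
 * kernel can find them again after kexec relative to the pool base.
 */
void *pmpool_alloc(struct pmpool *pool, size_t size);
void pmpool_free(struct pmpool *pool, void *ptr);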
>> Hi Stanislav, it looks like we're working on similar things. I'm looking
>> to develop a mechanism to support hypervisor live update for when KVM is
>> running VMs with PCI device passthrough. VMs with device passthrough
>> also necessitate passing and re-hydrating IOMMU state so that DMA can
>> continue during live update.
>>
>> Planning on having an LPC session on this topic:
>> https://lpc.events/event/17/abstracts/1629/ (currently it's only a
>> submitted abstract so not sure if visible, hopefully it will be soon).
>>
>> We are looking at implementing persistence across kexec via an in-memory
>> filesystem on top of reserved memory. This would have files for anything
>> that needs to be persisted. That includes files for IOMMU pgtables,
>> guest memory, and userspace-accessible memory.
>>
>> It may be nice to solve all kexec persistence requirements with one
>> solution, but we can consider IOMMU separately. There are at least three
>> ways that this can be done:
>> a) carving out reserved memory for pgtables. This is done by your
>> proposal here, as well as my suggestion of a filesystem.
>> b) pre/post kexec hooks for drivers to serialise state and pass it
>> across in a structured format from old to new kernel.
>> c) Reconstructing IOMMU state in the new kernel by starting at the
>> hardware registers and walking the page tables. No state passing needed.
>>
>> Have you considered options (b) and (c) here? One of the implications
>> of (b) and (c) is that they would need to hook into the buddy allocator
>> really early to be able to carve out the reconstructed page tables
>> before the allocator is used. Similar to how pkram [0] hooks in early to
>> carve out pages used for its filesystem.
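To make the early-carve-out point concrete, a rough sketch of what (b)
or (c) would need at boot might look like the following;
persist_find_old_pgtables() is an assumed helper that would locate the
old kernel's tables, e.g. by reading the IOMMU root pointer registers:

/*
 * Rough sketch only. Called from setup_arch(), while memblock is still
 * the allocator, so the reserved range never reaches the buddy
 * allocator. persist_find_old_pgtables() is an assumed helper.
 */
void __init persist_reserve_old_pgtables(void)
{
	phys_addr_t base, size;

	if (persist_find_old_pgtables(&base, &size))
		return;	/* nothing handed over, nothing to keep */

	memblock_reserve(base, size);
}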
>>
> Hi James,
>
> We are indeed working on similar things, so thanks for chiming in.
> I've seen the pkram proposal as well as your comments there.
>
> I think (b) will need some persistent-over-kexec memory to pass the
> state across kexec, as well as some persisted key-value store.
> And the proposed persistent memory pool is aimed at exactly this
> purpose.
> Or do you have some other way in mind to pass the drivers' data
> across kexec?
If I had to build this, I'd probably do it just like device tree passing
on ARM. It's a single, physically contiguous blob of data whose entry
point you pass to the target kernel. IIRC ACPI passing works similarly.
This would just be one more opaque data structure that then needs very
strict versioning and forward/backward compat guarantees.
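As a very rough illustration of what I mean (the names and layout below
are made up, not an existing format), the head of such a blob would
carry little more than a version and a directory of typed entries:

/*
 * Made-up illustration of a versioned handover blob; nothing like this
 * exists today. A single physically contiguous region whose address is
 * handed to the target kernel.
 */
struct persist_blob_header {
	__le32	magic;		/* identifies the handover blob */
	__le16	version_major;	/* incompatible format changes */
	__le16	version_minor;	/* backward-compatible additions */
	__le64	total_size;	/* bytes, including this header */
	__le32	nr_entries;	/* typed entries following the header */
	__le32	reserved;
};

struct persist_blob_entry {
	__le32	type;		/* e.g. IOMMU pgtables, PCI device state */
	__le32	flags;
	__le64	offset;		/* from the start of the blob */
	__le64	size;
};

The old kernel fills this in before kexec; the new kernel validates the
magic and version before trusting any of it.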
> I didn't consider (c) yet, thanks for the pointer.
>
> I have a question in this scope: how is the PCI devices' register state
> persisted across kexec with the file system you are working on? I.e.,
> how does the driver know that the device shouldn't be reinitialized?
The easiest way to do it initially would be kernel command line options
that hack up the drivers. But I suppose depending on the option we go
with, you can also use the respective "natural" path:
(a) A special metadata file that explains the state to the driver
(b) An entry in the structured file format that explains the state to
the target driver
(c) Compatible target drivers try to enumerate state from the target
device's register file
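For the command line variant, the hack could be as small as a flag the
affected driver checks before touching the device. Purely illustrative,
no such parameter exists today:

#include <linux/init.h>

/* Purely illustrative; no such parameter exists today. */
static bool persist_skip_reinit;

static int __init persist_skip_reinit_setup(char *str)
{
	persist_skip_reinit = true;
	return 1;
}
__setup("persist_skip_reinit", persist_skip_reinit_setup);

The driver's probe path would then check persist_skip_reinit and adopt
the already-running device instead of resetting it.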
>
>>>> Potential applications include:
>>>>
>>>> 1. Allowing various in-kernel entities to allocate persistent pages from
>>>> a singular memory pool, eliminating the need for multiple region
>>>> reservations.
>>>>
>>>> 2. For in-kernel components that require the allocation address to be
>>>> available on kernel kexec, this address can be exposed to user space and
>>>> then passed via the command line.
>> Do you have specific examples of other state that needs to be passed
>> across? Trying to see whether tailoring specifically to the IOMMU case
>> is okay. Conceptually IOMMU state can be reconstructed starting with
>> hardware registers, not needing reserved memory. Other use-cases may not
>> have this option.
>>
> Well, basically it's IOMMU state and PCI devices to skip/avoid
> initializing.
> I bet there can be other misc (and unrelated) things like persistent
> filesystems, block devices, etc. But I don't have a solid set of use
> cases to present.
It would be great if you could think through the problem space before
LPC so we can have a solid conversation there :)
>
>>> As you have no in-kernel users of this, it's not something we can even
>>> consider at the moment for obvious reasons (neither would you want us
>>> to.)
>>>
>>> Can you make this part of a patch series that actually adds a user,
>>> probably more than one, so that we can see if any of this even makes
>>> sense?
>> I'm very keen to see this as well. The way that the IOMMU drivers are
>> enlightened to hook into your memory pool will likely be similar to how
>> they would hook into my proposal of an in-memory filesystem.
>> Do you have code available showing the IOMMU integration?
>>
> No, I don't have such code yet.
> But I was thinking that using such an allocator in the mempool allows
> hiding this implementation under the hood of an existing generic
> mechanism, which can then be used to create persistent objects (a file
> system, for example) on top of it.
Unfortunately it's practically impossible to have a solid conversation
on generic mechanisms without actual users to see how they fit in with
the real world. That's Greg's answer to your patch set and I tend to
agree. What if (b) or (c) turns out to be much more viable? Then we've
wasted a lot of effort shaping up the allocator for no good reason.
>
>>>> drivers/misc/Kconfig | 7 +
>>>> drivers/misc/Makefile | 1
>>>> drivers/misc/pmpool.c | 270 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>> include/linux/pmpool.h | 20 ++++
>>>> 4 files changed, 298 insertions(+)
>>>> create mode 100644 drivers/misc/pmpool.c
>>>> create mode 100644 include/linux/pmpool.h
>>> misc is not for memory pools, as this is not a driver. please put this
>>> in the proper location instead of trying to hide it from the mm
>>> maintainers and subsystem :)
>> One of the reasons I thought a proper filesystem would be a better way
>> of exposing this functionality.
>>
> Yes, I see the point of having a file system for the goals you are
> targeting. It looks like the right way forward.
>
> What I'm not sure about is whether persistent-over-kexec memory
> management and the actual preservation mechanism should be an embedded
> part of this file system.
>
> What I'm trying to propose is a part of a generic mechanism to provide
> state persistence across kexec, which can then be used by your file
> system or something else.
>
> What do you think about this approach?
IMHO we need to at least prototype each of the paths outlined above to
get a sense of what works best. I can see problems with all of them -
and I'm personally not yet convinced there will be a one-size-fits-all
solution.
That said, I am happy to see you pursue similar paths to what we have in
mind. It means there is a real gap in functionality in Linux kexec that
we need to overcome sooner or later.
Alex
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879