Message-ID: <66d7eda2-c136-1245-b433-784264b31683@oracle.com>
Date: Wed, 31 May 2023 16:14:10 -0700
From: Anthony Yznaga <anthony.yznaga@...cle.com>
To: "Gowans, James" <jgowans@...zon.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: "kexec@...ts.infradead.org" <kexec@...ts.infradead.org>,
"jason.zeng@...el.com" <jason.zeng@...el.com>,
"keescook@...omium.org" <keescook@...omium.org>,
"lei.l.li@...el.com" <lei.l.li@...el.com>,
"luto@...nel.org" <luto@...nel.org>,
"rppt@...nel.org" <rppt@...nel.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"steven.sistare@...cle.com" <steven.sistare@...cle.com>,
"Graf (AWS), Alexander" <graf@...zon.de>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"mgalaxy@...mai.com" <mgalaxy@...mai.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"fam.zheng@...edance.com" <fam.zheng@...edance.com>,
"Woodhouse, David" <dwmw@...zon.co.uk>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"yuleixzhang@...cent.com" <yuleixzhang@...cent.com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"hpa@...or.com" <hpa@...or.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [RFC v3 00/21] Preserved-over-Kexec RAM
On 5/26/23 6:57 AM, Gowans, James wrote:
> On Wed, 2023-04-26 at 17:08 -0700, Anthony Yznaga wrote:
>> Sending out this RFC in part to gauge community interest.
>> This patchset implements preserved-over-kexec memory storage or PKRAM as a
>> method for saving memory pages of the currently executing kernel so that
>> they may be restored after kexec into a new kernel. The patches are adapted
>> from an RFC patchset sent out in 2013 by Vladimir Davydov [1]. They
>> introduce the PKRAM kernel API.
>>
>> One use case for PKRAM is preserving guest memory and/or auxiliary
>> supporting data (e.g. iommu data) across kexec to support reboot of the
>> host with minimal disruption to the guest.
> Hi Anthony,
Hi James,
Thank you for looking at this.
>
> Thanks for re-posting this - I've been wanting to re-kindle the discussion
> on preserving memory across kexec for a while now.
>
> There are a few aspects at play in this space of memory management
> designed specifically for the virtualisation and live update (kexec) use-
> case which I think we should consider:
>
> 1. Preserving userspace-accessible memory across kexec: this is what pkram
> addresses.
>
> 2. Preserving kernel state: This would include memory required for kexec
> with DMA passthrough devices, like IOMMU root page and page tables, DMA-
> able buffers for drivers, etc. Also certain structures for improved kernel
> boot performance after kexec, like a PCI device cache, clock LPJ and
> possibly others, sort of like what Xen breadcrumbs [0] achieves. The pkram RFC
> indicates that this should be possible, though IMO this could be more
> straightforward to do with a new filesystem with first-class support for
> kernel persistence via something like inode types for kernel data.
PKRAM as it is now can preserve kernel data by streaming bytes to a
PKRAM object, but the data must be location independent since it is
stored in newly allocated 4k pages rather than preserved in place.
This really isn't usable for things like page tables or memory that
must not move because it is targeted by DMA, etc.
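For reference, the save side of the byte-stream path looks roughly
like this (signatures simplified for illustration; don't take them
as exact):

  /* Sketch only: stream location-independent bytes into a named
   * PKRAM object before kexec. PKRAM copies the data into 4k pages
   * that it owns and preserves.
   */
  #include <linux/gfp.h>
  #include <linux/pkram.h>

  static int example_save(const void *buf, size_t len)
  {
          struct pkram_stream ps;
          int err;

          err = pkram_prepare_save(&ps, "example-state", GFP_KERNEL);
          if (err)
                  return err;
          pkram_write(&ps, buf, len);
          pkram_finish_save(&ps);
          return 0;
  }

The new kernel would then open the object by name with the
corresponding load calls and read the bytes back out.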
One issue with preserving non-relocatable, regular memory that is not
partitioned from the kernel is the risk that a kexec kernel has already
been loaded and that its pre-computed destination, where it will be
copied on reboot, will overwrite the preserved memory. Either some way
of re-processing the loaded kexec kernel so it lands elsewhere would be
needed, or kexec load would have to be restricted from loading where
memory might be preserved. Plusses for a partitioning approach.
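(As a strawman for partitioning: a boot-time reservation such as

  memmap=8G$0x100000000

takes the range out of System RAM entirely, so the segments of a
loaded kexec kernel can never be placed there. The range above is
made up, and note that some bootloaders need the '$' escaped.)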
>
> 3. Ensuring huge/gigantic memory allocations: to improve the TLB perf of
> 2-stage translations it's beneficial to allocate guest memory in large
> contiguous blocks, preferably PUD-level blocks for multi-GiB guests. If
> the buddy allocator is used this may be a challenge both from an
> implementation and a fragmentation perspective, and it may be desirable to
> have stronger guarantees about allocation sizes.
Agreed that guaranteeing large blocks and avoiding fragmentation are
issues for PKRAM. One possible avenue to address this could be to
support preserving hugetlb pages.
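For example, guest memory could be backed by gigantic pages reserved
at boot (counts below are illustrative):

  default_hugepagesz=1G hugepagesz=1G hugepages=16

and PKRAM would then preserve and restore those pages whole instead
of as 4k pieces.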
>
> 4. Removing struct page overhead: When doing the huge/gigantic
> allocations, in general it won't be necessary to have 4 KiB struct
> pages. This is something that dmemfs [1, 2] tries to achieve by using a
> large chunk of reserved memory and managing it with a new filesystem.
Has using DAX been considered? I'm not familiar with dmemfs, but it
sounds functionally similar.
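At least from userspace the usage model looks close to device DAX,
e.g. something like (device path and alignment are illustrative):

  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/mman.h>

  /* Map guest memory straight out of a device-DAX instance. */
  static void *map_guest_mem(size_t len)
  {
          int fd = open("/dev/dax0.0", O_RDWR);

          if (fd < 0)
                  return NULL;
          /* len must be a multiple of the device alignment (e.g. 2M);
           * mmap() returns MAP_FAILED on error.
           */
          return mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
  }

though dmemfs presumably adds its own allocation and management on
top of that.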
>
> 5. More "advanced" memory management APIs/ioctls for virtualisation: Being
> able to support things like DMA-driven post-copy live migration, memory
> oversubscription, carving out chunks of memory from a VM to launch side-
> car VMs, finer-grained control of IOMMU or MMU permissions, etc. This
> may be easier to achieve with a new filesystem, rather than coupling to
> tmpfs semantics and ioctls.
>
> Overall, with the above in mind, my take is that we may have a smoother
> path to implement a more comprehensive solution by going the route of a
> new purpose-built file system on top of reserved memory. Sort of like
> dmemfs, but with persistence, and specifically with support for kernel
> persistence.
>
> Does my take here make sense?
Yes, I believe so. There are some serious issues with PKRAM to address
before it could be truly viable (fragmentation, relocation, etc.), so
a memory partitioning approach might be the way to go.
>
> I'm hoping to put together an RFC for something like the above (dmemfs
> with persistence) soon, focusing on how the IOMMU persistence will work.
> This is an important differentiating factor to cover in the RFC, IMO.
Great! I'll keep an eye out for it.
Anthony
>
>> PKRAM provides a flexible way
>> for doing this without requiring that a fixed amount of memory be
>> reserved a priori.
> AFAICT the main down-side of what I'm suggesting here compared to pkram,
> is that as you say here: pkram doesn't require the up-front reserving of
> memory - allocations from the global shared pool are dynamic. I'm on the
> fence as to whether this is actually a desirable property though. Carving
> out a large chunk of system memory as reserved memory for a persisted
> filesystem (as I'm suggesting) has the advantages of removing struct page
> overhead, providing better guarantees about huge/gigantic page
> allocations, and probably makes the kexec restore path simpler and more
> self-contained.
>
> I think there's an argument to be made that having a clearly-defined large
> range of memory which is persisted, and the rest is normal "ephemeral"
> kernel memory may be preferable.
>
> Keen to hear your (and others) thoughts!
>
> JG
>
> [0] http://david.woodhou.se/live-update-handover.pdf
> [1] https://lwn.net/Articles/839216/
> [2] https://lkml.org/lkml/2020/12/7/342