Message-ID: <CA+CK2bB6GiivEhHYUg8roSug_cAnPBHJTc=J13nS+7iRJD7rTg@mail.gmail.com>
Date: Wed, 15 Oct 2025 08:46:28 -0400
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: akpm@...ux-foundation.org, brauner@...nel.org, corbet@....net,
graf@...zon.com, jgg@...pe.ca, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-mm@...ck.org, masahiroy@...nel.org,
ojeda@...nel.org, pratyush@...nel.org, rdunlap@...radead.org, tj@...nel.org,
jasonmiu@...gle.com, dmatlack@...gle.com, skhawaja@...gle.com
Subject: Re: [PATCH 2/2] liveupdate: kho: allocate metadata directly from the
buddy allocator
On Wed, Oct 15, 2025 at 4:37 AM Mike Rapoport <rppt@...nel.org> wrote:
>
> On Wed, Oct 15, 2025 at 01:31:21AM -0400, Pasha Tatashin wrote:
> > KHO allocates metadata for its preserved memory map using the SLUB
> > allocator via kzalloc(). This metadata is temporary and is used by the
> > next kernel during early boot to find preserved memory.
> >
> > A problem arises when KFENCE is enabled. kzalloc() calls can be
> > randomly intercepted by kfence_alloc(), which services the allocation
> > from a dedicated KFENCE memory pool. This pool is allocated early in
> > boot via memblock.
> >
> > When booting via KHO, the memblock allocator is restricted to a "scratch
> > area", forcing the KFENCE pool to be allocated within it. This creates a
> > conflict, as the scratch area is expected to be ephemeral and
> > overwriteable by a subsequent kexec. If KHO metadata is placed in this
> > KFENCE pool, it leads to memory corruption when the next kernel is
> > loaded.
> >
> > To fix this, modify KHO to allocate its metadata directly from the buddy
> > allocator instead of SLUB.
> >
> > As part of this change, the metadata bitmap size is increased from 512
> > bytes to PAGE_SIZE to align with the page-based allocations from the
> > buddy system.
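
As a sanity check on the numbers above, here is the arithmetic as a tiny
standalone C snippet (illustration only; it assumes PAGE_SIZE == 4096 and
mirrors the PRESERVE_BITS macro changed below):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PRESERVE_BITS	(PAGE_SIZE * 8)		/* 32768 bits per bitmap */

int main(void)
{
	/* One bit tracks one order-0 page, so one bitmap covers: */
	unsigned long order0_cov = PRESERVE_BITS * PAGE_SIZE;	/* 128 MiB */
	/* Number of bitmaps needed to track 16 GiB at order 0: */
	unsigned long chunks = (16UL << 30) / order0_cov;	/* 128 */

	printf("order-0 coverage per bitmap: %lu MiB\n", order0_cov >> 20);
	printf("bitmap memory for 16 GiB:    %lu KiB\n",
	       chunks * PAGE_SIZE >> 10);
	return 0;
}

Both figures match the 128M and 512K values in the updated comment.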
> >
> > Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> > Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
> > ---
> > kernel/liveupdate/kexec_handover.c | 23 +++++++++++++----------
> > 1 file changed, 13 insertions(+), 10 deletions(-)
> >
> > diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> > index ef1e6f7a234b..519de6d68b27 100644
> > --- a/kernel/liveupdate/kexec_handover.c
> > +++ b/kernel/liveupdate/kexec_handover.c
> > @@ -66,10 +66,10 @@ early_param("kho", kho_parse_enable);
> > * Keep track of memory that is to be preserved across KHO.
> > *
> > * The serializing side uses two levels of xarrays to manage chunks of per-order
> > - * 512 byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order of a
> > - * 1TB system would fit inside a single 512 byte bitmap. For order 0 allocations
> > - * each bitmap will cover 16M of address space. Thus, for 16G of memory at most
> > - * 512K of bitmap memory will be needed for order 0.
> > + * PAGE_SIZE byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order
> > + * of an 8TB system would fit inside a single 4096 byte bitmap. For order 0
> > + * allocations each bitmap will cover 128M of address space. Thus, for 16G of
> > + * memory at most 512K of bitmap memory will be needed for order 0.
> > *
> > * This approach is fully incremental, as the serialization progresses folios
> > * can continue be aggregated to the tracker. The final step, immediately prior
> > @@ -77,7 +77,7 @@ early_param("kho", kho_parse_enable);
> > * successor kernel to parse.
> > */
> >
> > -#define PRESERVE_BITS (512 * 8)
> > +#define PRESERVE_BITS (PAGE_SIZE * 8)
> >
> > struct kho_mem_phys_bits {
> > DECLARE_BITMAP(preserve, PRESERVE_BITS);
> > @@ -131,18 +131,21 @@ static struct kho_out kho_out = {
> >
> > static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
>
> The name 'xa_load_or_alloc' is confusing now that we only use this function
> to allocate bitmaps. I think it should be renamed to reflect that and its
> return type should be 'struct kho_mem_phys_bits'. Then it wouldn't need the
> sz parameter and the size calculations below would become redundant.

Indeed, but that is not something this patch changes.
I am thinking of splitting the bitmap size increase to PAGE_SIZE out of
this patch, then renaming this function and removing the size_t argument
in another patch, and finally keeping the fix that replaces SLUB with the
buddy allocator as its own patch.
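
A very rough sketch of what that bitmap-specific helper could look like
after the rename (the name kho_alloc_bits is only a placeholder, and the
lost-race handling is assumed to stay the same as in the current
xa_load_or_alloc()):

static struct kho_mem_phys_bits *kho_alloc_bits(struct xarray *xa,
						unsigned long index)
{
	const unsigned int order =
		get_order(sizeof(struct kho_mem_phys_bits));
	struct kho_mem_phys_bits *bits;
	void *res;

	bits = xa_load(xa, index);
	if (bits)
		return bits;

	bits = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
	if (!bits)
		return ERR_PTR(-ENOMEM);

	if (WARN_ON(kho_scratch_overlap(virt_to_phys(bits),
					PAGE_SIZE << order))) {
		free_pages((unsigned long)bits, order);
		return ERR_PTR(-EINVAL);
	}

	/* Assumed to mirror the lost-race path of xa_load_or_alloc(). */
	res = xa_cmpxchg(xa, index, NULL, bits, GFP_KERNEL);
	if (xa_is_err(res))
		res = ERR_PTR(xa_err(res));
	if (res) {
		free_pages((unsigned long)bits, order);
		return res;
	}

	return bits;
}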
>
> > {
> > + unsigned int order;
> > void *elm, *res;
> >
> > elm = xa_load(xa, index);
> > if (elm)
> > return elm;
> >
> > - elm = kzalloc(sz, GFP_KERNEL);
> > + order = get_order(sz);
> > + elm = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
> > if (!elm)
> > return ERR_PTR(-ENOMEM);
> >
> > - if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz))) {
> > - kfree(elm);
> > + if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm),
> > + PAGE_SIZE << order))) {
> > + free_pages((unsigned long)elm, order);
> > return ERR_PTR(-EINVAL);
> > }
> >
>
> --
> Sincerely yours,
> Mike.
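
One detail worth spelling out: the overlap check above now passes
PAGE_SIZE << order rather than sz because __get_free_pages() always hands
back a power-of-two number of pages, so the whole rounded-up region is
what must stay clear of the scratch area. A small standalone illustration
of that rounding (get_order() is open-coded here only so it builds outside
the kernel):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Same result as the kernel's get_order() for sz > 0. */
static unsigned int get_order(unsigned long sz)
{
	unsigned int order = 0;

	sz = (sz - 1) >> PAGE_SHIFT;
	while (sz) {
		order++;
		sz >>= 1;
	}
	return order;
}

static void show(unsigned long sz)
{
	unsigned int order = get_order(sz);

	printf("sz=%5lu -> order=%u -> allocated span=%lu bytes\n",
	       sz, order, PAGE_SIZE << order);
}

int main(void)
{
	show(4096);	/* one PAGE_SIZE bitmap: exactly one page, order 0 */
	show(9000);	/* needs 3 pages, but the buddy hands back 4 (order 2) */
	return 0;
}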