Message-ID: <20251018171756.1724191-11-pasha.tatashin@soleen.com>
Date: Sat, 18 Oct 2025 13:17:56 -0400
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: akpm@...ux-foundation.org,
brauner@...nel.org,
corbet@....net,
graf@...zon.com,
jgg@...pe.ca,
linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org,
linux-mm@...ck.org,
masahiroy@...nel.org,
ojeda@...nel.org,
pasha.tatashin@...een.com,
pratyush@...nel.org,
rdunlap@...radead.org,
rppt@...nel.org,
tj@...nel.org,
jasonmiu@...gle.com,
dmatlack@...gle.com,
skhawaja@...gle.com
Subject: [PATCH v6 10/10] liveupdate: kho: allocate metadata directly from the buddy allocator

KHO allocates metadata for its preserved memory map using the slab
allocator via kzalloc(). This metadata is temporary and is used by the
next kernel during early boot to find preserved memory.

A problem arises when KFENCE is enabled. kzalloc() calls can be
randomly intercepted by kfence_alloc(), which services the allocation
from a dedicated KFENCE memory pool. This pool is allocated early in
boot via memblock.
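
(For reference, a simplified sketch of the interception point; the real
logic lives in the slab hot path and in kfence_alloc() from
include/linux/kfence.h, and the function below is paraphrased rather
than the exact upstream code:)

	/*
	 * Simplified: each slab allocation is first offered to KFENCE;
	 * occasionally kfence_alloc() returns an object from the
	 * dedicated KFENCE pool instead of NULL, bypassing slab.
	 */
	static __always_inline void *slab_alloc(struct kmem_cache *s,
						gfp_t gfpflags, size_t size)
	{
		void *object = kfence_alloc(s, size, gfpflags);

		if (unlikely(object))
			return object;	/* served from the KFENCE pool */

		/* ... otherwise fall through to the regular slab path ... */
	}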
When booting via KHO, the memblock allocator is restricted to a "scratch
area", forcing the KFENCE pool to be allocated within it. This creates a
conflict: the scratch area is expected to be ephemeral and overwritable
by a subsequent kexec. If KHO metadata is placed in the KFENCE pool, it
is corrupted when the next kernel is loaded.

To fix this, modify KHO to allocate its metadata directly from the buddy
allocator instead of slab.
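
Concretely, the conversion mirrors the hunks below: the PAGE_SIZE slab
allocations become page allocations, which KFENCE never intercepts. A
minimal before/after illustration:

	/* Before: slab allocation, eligible for KFENCE sampling */
	void *buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
	kfree(buf);

	/* After: a zeroed page taken directly from the buddy allocator */
	void *buf = (void *)get_zeroed_page(GFP_KERNEL);
	free_page((unsigned long)buf);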
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
Reviewed-by: Pratyush Yadav <pratyush@...nel.org>
---
 kernel/liveupdate/kexec_handover.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 7c8e89a6b953..92662739a3a2 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -132,6 +132,8 @@ static struct kho_out kho_out = {
 	.finalized = false,
 };
 
+DEFINE_FREE(kho_free_page, void *, free_page((unsigned long)_T))
+
 static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
 {
 	void *res = xa_load(xa, index);
@@ -139,7 +141,7 @@ static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
 	if (res)
 		return res;
 
-	void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	void *elm __free(kho_free_page) = (void *)get_zeroed_page(GFP_KERNEL);
 
 	if (!elm)
 		return ERR_PTR(-ENOMEM);
@@ -352,9 +354,9 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
 					  unsigned long order)
 {
-	struct khoser_mem_chunk *chunk __free(kfree) = NULL;
+	struct khoser_mem_chunk *chunk __free(kho_free_page) = NULL;
 
-	chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	chunk = (void *)get_zeroed_page(GFP_KERNEL);
 
 	if (!chunk)
 		return ERR_PTR(-ENOMEM);
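
(Note on the cleanup helper: DEFINE_FREE() and __free() come from
<linux/cleanup.h>; the new kho_free_page scope guard frees the page
automatically on early-exit paths. A minimal usage sketch, assuming the
caller keeps the page on success via no_free_ptr() from the same
header:)

	void *elm __free(kho_free_page) = (void *)get_zeroed_page(GFP_KERNEL);

	if (!elm)
		return ERR_PTR(-ENOMEM);

	/* ... on success, disarm the guard and hand ownership out ... */
	return no_free_ptr(elm);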
--
2.51.0.915.g61a8936c21-goog