Message-ID: <20251223104448.195589-1-pratyush@kernel.org>
Date: Tue, 23 Dec 2025 11:44:46 +0100
From: Pratyush Yadav <pratyush@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Alexander Graf <graf@...zon.com>,
Mike Rapoport <rppt@...nel.org>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Pratyush Yadav <pratyush@...nel.org>
Cc: kexec@...ts.infradead.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] kho: simplify page initialization in kho_restore_page()
When restoring pages (from kho_restore_pages()) or a folio (from
kho_restore_folio()), KHO must initialize the struct page. The
initialization differs slightly depending on whether a folio or a set
of 0-order pages is requested.
Conceptually, it is quite simple. When restoring 0-order pages, each
page gets a refcount of 1 and that's it. When restoring a folio, the
head page gets a refcount of 1 and the tail pages get a refcount of 0.
kho_restore_page() tries to combine the two separate initialization
flows into one piece of code. While it works fine, it is more
complicated to read than it needs to be. Make the code simpler by
splitting the two initialization paths into two separate functions.
This improves readability by clearly showing how each type must be
initialized.
Signed-off-by: Pratyush Yadav <pratyush@...nel.org>
---
Notes:
This patch is a follow up from
https://lore.kernel.org/linux-mm/86ms42mj44.fsf@kernel.org/
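For reference, a minimal caller-side sketch of the two resulting
initialization behaviours (illustrative only, not part of this patch;
it assumes the existing kho_restore_folio()/kho_restore_pages() entry
points with their current signatures):

	/* Restore a preserved folio: head refcount 1, tail refcounts 0. */
	struct folio *folio = kho_restore_folio(phys);

	if (!folio)
		pr_err("failed to restore folio at %pa\n", &phys);

	/* Restore nr_pages contiguous 0-order pages: each refcount 1. */
	struct page *page = kho_restore_pages(phys, nr_pages);

	if (!page)
		pr_err("failed to restore pages at %pa\n", &phys);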
kernel/liveupdate/kexec_handover.c | 41 ++++++++++++++++++++----------
1 file changed, 27 insertions(+), 14 deletions(-)
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 2d9ce33c63dc..304c26fd5ee6 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -219,11 +219,33 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
return 0;
}
+/* For physically contiguous 0-order pages. */
+static void kho_init_pages(struct page *page, unsigned int nr_pages)
+{
+ for (unsigned int i = 0; i < nr_pages; i++)
+ set_page_count(page + i, 1);
+}
+
+static void kho_init_folio(struct page *page, unsigned int order)
+{
+ unsigned int nr_pages = (1 << order);
+
+ /* Head page gets refcount of 1. */
+ set_page_count(page, 1);
+
+ /* For higher order folios, tail pages get a page count of zero. */
+ for (unsigned int i = 1; i < nr_pages; i++)
+ set_page_count(page + i, 0);
+
+ if (order > 0)
+ prep_compound_page(page, order);
+}
+
static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
{
struct page *page = pfn_to_online_page(PHYS_PFN(phys));
- unsigned int nr_pages, ref_cnt;
union kho_page_info info;
+ unsigned int nr_pages;
if (!page)
return NULL;
@@ -240,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
/* Clear private to make sure later restores on this page error out. */
page->private = 0;
- /* Head page gets refcount of 1. */
- set_page_count(page, 1);
- /*
- * For higher order folios, tail pages get a page count of zero.
- * For physically contiguous order-0 pages every pages gets a page
- * count of 1
- */
- ref_cnt = is_folio ? 0 : 1;
- for (unsigned int i = 1; i < nr_pages; i++)
- set_page_count(page + i, ref_cnt);
-
- if (is_folio && info.order)
- prep_compound_page(page, info.order);
+ if (is_folio)
+ kho_init_folio(page, info.order);
+ else
+ kho_init_pages(page, nr_pages);
adjust_managed_page_count(page, nr_pages);
return page;
base-commit: 9f7b37a7c250baf3092719d4ebc9a8edaa79a7b4
--
2.43.0