Message-ID: <CA+CK2bBv2wpduYQF_fwzciH4HxZ6eFjwZMSpZwW0AC6KXL4msg@mail.gmail.com>
Date: Tue, 23 Dec 2025 12:49:39 -0500
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: Pratyush Yadav <pratyush@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Alexander Graf <graf@...zon.com>, 
	Mike Rapoport <rppt@...nel.org>, kexec@...ts.infradead.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kho: simplify page initialization in kho_restore_page()

On Tue, Dec 23, 2025 at 5:45 AM Pratyush Yadav <pratyush@...nel.org> wrote:
>
> When restoring a page (from kho_restore_pages()) or folio (from
> kho_restore_folio()), KHO must initialize the struct page. The
> initialization differs slightly depending on whether a folio is
> requested or a set of 0-order pages is requested.
>
> Conceptually, it is quite simple to understand. When restoring 0-order
> pages, each page gets a refcount of 1 and that's it. When restoring a
> folio, the head page gets a refcount of 1 and tail pages get 0.
>
> kho_restore_page() tries to combine the two separate initialization
> flows into one piece of code. While it works fine, it is more
> complicated to read than it needs to be. Make the code simpler by
> splitting the two initialization paths into two separate functions.
> This improves readability by clearly showing how each type must be
> initialized.
>
> Signed-off-by: Pratyush Yadav <pratyush@...nel.org>
> ---
>
> Notes:
>     This patch is a follow up from
>     https://lore.kernel.org/linux-mm/86ms42mj44.fsf@kernel.org/
>
>  kernel/liveupdate/kexec_handover.c | 41 ++++++++++++++++++++----------
>  1 file changed, 27 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 2d9ce33c63dc..304c26fd5ee6 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -219,11 +219,33 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
>         return 0;
>  }
>
> +/* For physically contiguous 0-order pages. */
> +static void kho_init_pages(struct page *page, unsigned int nr_pages)

Here and in other places below, it is better for nr_pages to be
unsigned long. This is consistent with other places in mm, where we
have gradually moved from int/unsigned int to unsigned long for
npages (see gup.c for an example). Otherwise, LGTM.

> +{
> +       for (unsigned int i = 0; i < nr_pages; i++)
> +               set_page_count(page + i, 1);
> +}
> +
> +static void kho_init_folio(struct page *page, unsigned int order)
> +{
> +       unsigned int nr_pages = (1 << order);
> +
> +       /* Head page gets refcount of 1. */
> +       set_page_count(page, 1);
> +
> +       /* For higher order folios, tail pages get a page count of zero. */
> +       for (unsigned int i = 1; i < nr_pages; i++)
> +               set_page_count(page + i, 0);
> +
> +       if (order > 0)
> +               prep_compound_page(page, order);
> +}
> +
>  static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>  {
>         struct page *page = pfn_to_online_page(PHYS_PFN(phys));
> -       unsigned int nr_pages, ref_cnt;
>         union kho_page_info info;
> +       unsigned int nr_pages;
>
>         if (!page)
>                 return NULL;
> @@ -240,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>
>         /* Clear private to make sure later restores on this page error out. */
>         page->private = 0;
> -       /* Head page gets refcount of 1. */
> -       set_page_count(page, 1);
>
> -       /*
> -        * For higher order folios, tail pages get a page count of zero.
> -        * For physically contiguous order-0 pages every pages gets a page
> -        * count of 1
> -        */
> -       ref_cnt = is_folio ? 0 : 1;
> -       for (unsigned int i = 1; i < nr_pages; i++)
> -               set_page_count(page + i, ref_cnt);
> -
> -       if (is_folio && info.order)
> -               prep_compound_page(page, info.order);
> +       if (is_folio)
> +               kho_init_folio(page, info.order);
> +       else
> +               kho_init_pages(page, nr_pages);

Thanks,
Pasha
