Message-ID: <5f1e139d-cd6f-438f-8b5f-be356314d9aa@kernel.org>
Date: Mon, 17 Nov 2025 18:12:05 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Jiaqi Yan <jiaqiyan@...gle.com>, nao.horiguchi@...il.com,
linmiaohe@...wei.com, ziy@...dia.com
Cc: lorenzo.stoakes@...cle.com, william.roche@...cle.com,
harry.yoo@...cle.com, tony.luck@...el.com, wangkefeng.wang@...wei.com,
willy@...radead.org, jane.chu@...cle.com, akpm@...ux-foundation.org,
osalvador@...e.de, muchun.song@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v1 1/2] mm/huge_memory: introduce
uniform_split_unmapped_folio_to_zero_order
On 16.11.25 02:47, Jiaqi Yan wrote:
> When freeing a high-order folio that contains HWPoison pages,
> to ensure these HWPoison pages are not added to the buddy
> allocator, we can first uniformly split the free and unmapped
> high-order folio into 0-order folios, then add only the
> non-HWPoison folios to the buddy allocator, excluding the
> HWPoison ones.
>
> Introduce uniform_split_unmapped_folio_to_zero_order, a wrapper
> around the existing __split_unmapped_folio. Callers can use it
> to uniformly split an unmapped high-order folio into 0-order
> folios.
>
> No functional change. It will be used in a subsequent commit.
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
> ---
> include/linux/huge_mm.h | 6 ++++++
> mm/huge_memory.c | 8 ++++++++
> 2 files changed, 14 insertions(+)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 71ac78b9f834f..ef6a84973e157 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -365,6 +365,7 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
> vm_flags_t vm_flags);
>
> bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
> +int uniform_split_unmapped_folio_to_zero_order(struct folio *folio);
> int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> unsigned int new_order);
> int min_order_for_split(struct folio *folio);
> @@ -569,6 +570,11 @@ can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> {
> return false;
> }
> +static inline int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
> +{
> +	VM_WARN_ON_ONCE_FOLIO(1, folio);
> + return -EINVAL;
> +}
IIUC this patch won't be required (I agree that ideally the page
allocator takes care of this), but for the future, let's consistently
name these things "folio_split_XXX".
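
For illustration only, here is a rough sketch of the shape such a
wrapper and its intended caller could take. The mm/huge_memory.c
hunk is trimmed from the quote above, so the
__split_unmapped_folio() argument list below is an assumption
rather than a quote from the patch, and
free_unmapped_hwpoison_folio() is a hypothetical name for the
patch 2/2 caller:

int uniform_split_unmapped_folio_to_zero_order(struct folio *folio)
{
	/* The caller must hand in a fully unmapped folio. */
	VM_WARN_ON_ONCE_FOLIO(folio_mapped(folio), folio);

	/*
	 * Assumed call shape: uniform split of the whole folio down
	 * to new_order == 0 in a single pass.
	 */
	return __split_unmapped_folio(folio, 0, &folio->page,
				      NULL, NULL, true);
}

/*
 * Hypothetical caller on the freeing path: split, then hand only
 * the clean 0-order pages back to the buddy allocator.
 */
static void free_unmapped_hwpoison_folio(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);
	struct page *page = folio_page(folio, 0);

	if (uniform_split_unmapped_folio_to_zero_order(folio))
		return;	/* on failure, keep the whole range out of buddy */

	for (i = 0; i < nr; i++) {
		if (PageHWPoison(page + i))
			continue;	/* never reaches the freelists */
		/* refcount/zone details elided in this sketch */
		__free_page(page + i);
	}
}

Which is also why, as noted above, none of this would be needed if
the page allocator itself filtered out HWPoison pages when freeing.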
--
Cheers
David