Message-ID: <aWXaiWI3v_PJOKDL@hyeyoo>
Date: Tue, 13 Jan 2026 14:39:21 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Jiaqi Yan <jiaqiyan@...gle.com>
Cc: jackmanb@...gle.com, hannes@...xchg.org, linmiaohe@...wei.com,
ziy@...dia.com, willy@...radead.org, nao.horiguchi@...il.com,
david@...hat.com, lorenzo.stoakes@...cle.com, william.roche@...cle.com,
tony.luck@...el.com, wangkefeng.wang@...wei.com, jane.chu@...cle.com,
akpm@...ux-foundation.org, osalvador@...e.de, muchun.song@...ux.dev,
rientjes@...gle.com, duenwen@...gle.com, jthoughton@...gle.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com, mhocko@...e.com
Subject: Re: [PATCH v3 2/3] mm/page_alloc: only free healthy pages in
high-order has_hwpoisoned folio
On Mon, Jan 12, 2026 at 12:49:22AM +0000, Jiaqi Yan wrote:
> At the end of dissolve_free_hugetlb_folio(), a free HugeTLB folio
> becomes non-HugeTLB, and it is released to buddy allocator
> as a high-order folio, e.g. a folio that contains 262144 pages
> if the folio was a 1G HugeTLB hugepage.
>
> This is problematic if the HugeTLB hugepage contained HWPoison
> subpages. In that case, since buddy allocator does not check
> HWPoison for non-zero-order folio, the raw HWPoison page can
> be given out with its buddy page and be re-used by either
> kernel or userspace.
>
> Memory failure recovery (MFR) in kernel does attempt to take
> raw HWPoison page off buddy allocator after
> dissolve_free_hugetlb_folio(). However, there is always a time
> window between when dissolve_free_hugetlb_folio() frees a HWPoison
> high-order folio to the buddy allocator and when MFR takes the
> HWPoison raw page off the buddy allocator.
I wonder if this is something we want to backport to -stable.
> One obvious way to avoid this problem is to add page sanity
> checks in page allocate or free path. However, it is against
> the past efforts to reduce sanity check overhead [1,2,3].
>
> Introduce free_has_hwpoisoned() to only free the healthy pages
> and to exclude the HWPoison ones in the high-order folio.
> The idea is to iterate through the sub-pages of the folio to
> identify contiguous ranges of healthy pages. Instead of freeing
> pages one by one, decompose healthy ranges into the largest
> possible blocks having different orders. Every block meets the
> requirements to be freed via __free_one_page().
>
> free_has_hwpoisoned() has linear time complexity wrt the number
> of pages in the folio. While the power-of-two decomposition
> ensures that the number of calls to the buddy allocator is
> logarithmic for each contiguous healthy range, the mandatory
> linear scan of pages to identify PageHWPoison() defines the
> overall time complexity. For a 1G hugepage having several
> HWPoison pages, free_has_hwpoisoned() takes around 2ms on
> average.
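For reference, the decomposition described above can be modeled in
userspace roughly as follows. This is only a sketch of the idea, not the
patch's actual code: free_one_block(), free_healthy_range() and
SKETCH_MAX_ORDER are hypothetical names, and free_one_block() merely
counts calls where the kernel would invoke __free_one_page():

```c
/*
 * Userspace sketch (assumption, not the patch's code): given a
 * contiguous range of healthy pfns [start, end), emit the largest
 * aligned power-of-two blocks, as __free_one_page() would require.
 */
#define SKETCH_MAX_ORDER 18	/* a 1G hugepage spans 2^18 4K pages */

static unsigned int blocks_freed;

static void free_one_block(unsigned long pfn, unsigned int order)
{
	(void)pfn;
	(void)order;
	blocks_freed++;		/* stand-in for __free_one_page() */
}

static void free_healthy_range(unsigned long start, unsigned long end)
{
	while (start < end) {
		unsigned int order = SKETCH_MAX_ORDER;

		/* shrink until the block is aligned and fits in the range */
		while (order && ((start & ((1UL << order) - 1)) ||
				 start + (1UL << order) > end))
			order--;
		free_one_block(start, order);
		start += 1UL << order;
	}
}
```

With a single HWPoison page at the first pfn of a 1G folio, the healthy
range [1, 262144) decomposes into 18 blocks (orders 0 through 17),
which matches the logarithmic number of buddy calls per contiguous
range that the commit message describes.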
>
> Since free_has_hwpoisoned() has nontrivial overhead, it is
> wrapped inside free_pages_prepare_has_hwpoisoned() and done
> only when PG_has_hwpoisoned indicates a HWPoison page exists and
> after free_pages_prepare() succeeded.
>
> [1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net
> [2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net
> [3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
>
> ---
> mm/page_alloc.c | 157 +++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 154 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 822e05f1a9646..9393589118604 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2923,6 +2928,152 @@ static bool free_frozen_page_commit(struct zone *zone,
> return ret;
> }
From a correctness point of view it looks good to me.
Let's see what the page allocator folks say.
A few nits below.
> +static bool compound_has_hwpoisoned(struct page *page, unsigned int order)
> +{
> + if (order == 0 || !PageCompound(page))
> + return false;
nit: since an order-0 compound page is not a thing, the
!PageCompound(page) check alone should cover the order == 0 case.
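i.e., the simplified check could look like the sketch below. The struct
page and PageCompound() here are hypothetical userspace stubs so the
logic can be exercised standalone; they are not the kernel API, and
has_hwpoisoned stands in for folio_test_has_hwpoisoned():

```c
#include <stdbool.h>

/* Hypothetical userspace stubs, only so the check runs standalone. */
struct page {
	bool compound;
	bool has_hwpoisoned;
};

static bool PageCompound(const struct page *page)
{
	return page->compound;
}

static bool compound_has_hwpoisoned(const struct page *page)
{
	/*
	 * An order-0 page is never compound, so !PageCompound()
	 * alone covers the order == 0 case.
	 */
	if (!PageCompound(page))
		return false;
	return page->has_hwpoisoned;
}
```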
> + return folio_test_has_hwpoisoned(page_folio(page));
> +}
> +
> +/*
> + * Do free_has_hwpoisoned() when needed after free_pages_prepare().
> + * Returns
> + * - true: free_pages_prepare() is good and caller can proceed freeing.
> + * - false: caller should not free pages for one of the two reasons:
> + * 1. free_pages_prepare() failed so it is not safe to proceed freeing.
> + * 2. this is a compound page having some HWPoison pages, and healthy
> + * pages are already safely freed.
> + */
> +static bool free_pages_prepare_has_hwpoisoned(struct page *page,
> + unsigned int order,
> + fpi_t fpi_flags)
nit: I hope we'll come up with a better name than
free_pages_prepare_has_hwpoisoned(), but I don't have a better
suggestion... :)
And I hope somebody familiar with compaction (compaction_free() calls
free_pages_prepare() and ignores its return value) could confirm that
it is safe to do the compound_has_hwpoisoned() check directly in
free_pages_prepare() and, when it returns true, call
free_has_hwpoisoned() there, so that we won't need a separate wrapper
function.
--
Cheers,
Harry / Hyeonggon