Message-ID: <CACw3F53x=n2mNJ6QS9wBRESim8ojHCmhY+-YLAL5N_wi6x0P4Q@mail.gmail.com>
Date: Mon, 17 Nov 2025 21:17:48 -0800
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: nao.horiguchi@...il.com, linmiaohe@...wei.com, ziy@...dia.com,
lorenzo.stoakes@...cle.com, william.roche@...cle.com, harry.yoo@...cle.com,
tony.luck@...el.com, wangkefeng.wang@...wei.com, willy@...radead.org,
jane.chu@...cle.com, akpm@...ux-foundation.org, osalvador@...e.de,
muchun.song@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v1 2/2] mm/memory-failure: avoid free HWPoison high-order folio
On Mon, Nov 17, 2025 at 9:15 AM David Hildenbrand (Red Hat)
<david@...nel.org> wrote:
>
> On 16.11.25 02:47, Jiaqi Yan wrote:
> > At the end of dissolve_free_hugetlb_folio, when a free HugeTLB
> > folio becomes non-HugeTLB, it is released to the buddy allocator
> > as a high-order folio, e.g. a folio containing 262144 pages
> > if it was a 1G HugeTLB hugepage.
> >
> > This is problematic if the HugeTLB hugepage contained HWPoison
> > subpages. In that case, since the buddy allocator does not check
> > for HWPoison in non-zero-order folios, the raw HWPoison page can
> > be handed out along with its buddy pages and re-used by either
> > the kernel or userspace.
> >
> > Memory failure recovery (MFR) in the kernel does attempt to take
> > the raw HWPoison page off the buddy allocator after
> > dissolve_free_hugetlb_folio. However, there is always a time
> > window between the page being freed to the buddy allocator and
> > being taken off it.
> >
> > One obvious way to avoid this problem is to add page sanity
> > checks in the page allocation or free paths. However, that runs
> > against past efforts to reduce sanity-check overhead [1,2,3].
> >
> > Introduce hugetlb_free_hwpoison_folio to solve this problem.
> > The idea is: when a HugeTLB folio is known to contain HWPoison
> > page(s), first split the non-HugeTLB high-order folio uniformly
> > into 0-order folios, then let the healthy pages join the buddy
> > allocator while rejecting the HWPoison ones.
> >
> > [1] https://lore.kernel.org/linux-mm/1460711275-1130-15-git-send-email-mgorman@techsingularity.net/
> > [2] https://lore.kernel.org/linux-mm/1460711275-1130-16-git-send-email-mgorman@techsingularity.net/
> > [3] https://lore.kernel.org/all/20230216095131.17336-1-vbabka@suse.cz
> >
> > Signed-off-by: Jiaqi Yan <jiaqiyan@...gle.com>
>
>
> [...]
>
> > /*
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index 3edebb0cda30b..e6a9deba6292a 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -2002,6 +2002,49 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> > return ret;
> > }
> >
> > +void hugetlb_free_hwpoison_folio(struct folio *folio)
>
> What is hugetlb specific in here? :)
>
> Hint: if there is nothing, likely it should be generic infrastructure.
>
> But I would prefer if the page allocator could just take care of that
> when freeing a folio.
Ack, and if it could be taken care of by the page allocator, it would
indeed be generic infrastructure.
>
> --
> Cheers
>
> David