Message-ID: <CACw3F53Rck2Bf_C45Uk=A1NJ4zB1B0R1+GqvkNxsz3h3mDx-pQ@mail.gmail.com>
Date: Tue, 18 Nov 2025 11:26:27 -0800
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: Harry Yoo <harry.yoo@...cle.com>
Cc: Matthew Wilcox <willy@...radead.org>, ziy@...dia.com, david@...hat.com, 
	Vlastimil Babka <vbabka@...e.cz>, nao.horiguchi@...il.com, linmiaohe@...wei.com, 
	lorenzo.stoakes@...cle.com, william.roche@...cle.com, tony.luck@...el.com, 
	wangkefeng.wang@...wei.com, jane.chu@...cle.com, akpm@...ux-foundation.org, 
	osalvador@...e.de, muchun.song@...ux.dev, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org, 
	Michal Hocko <mhocko@...e.com>, Suren Baghdasaryan <surenb@...gle.com>, 
	Brendan Jackman <jackmanb@...gle.com>, Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v1 1/2] mm/huge_memory: introduce uniform_split_unmapped_folio_to_zero_order

On Tue, Nov 18, 2025 at 2:20 AM Harry Yoo <harry.yoo@...cle.com> wrote:
>
> On Mon, Nov 17, 2025 at 10:24:27PM -0800, Jiaqi Yan wrote:
> > On Mon, Nov 17, 2025 at 5:43 AM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > On Mon, Nov 17, 2025 at 12:15:23PM +0900, Harry Yoo wrote:
> > > > On Sun, Nov 16, 2025 at 11:51:14AM +0000, Matthew Wilcox wrote:
> > > > > But since we're only doing this on free, we won't need to do folio
> > > > > allocations at all; we'll just be able to release the good pages to the
> > > > > page allocator and sequester the hwpoison pages.
> > > >
> > > > [+Cc PAGE ALLOCATOR folks]
> > > >
> > > > So we need an interface to free only the healthy portion of a hwpoison folio.
> >
> > +1, with some of my own thoughts below.
> >
> > > >
> > > > I think a proper approach to this should be to "free a hwpoison folio
> > > > just like freeing a normal folio via folio_put() or free_frozen_pages(),
> > > > then the page allocator will add only healthy pages to the freelist and
> > > > isolate the hwpoison pages". Otherwise we'll end up open coding a lot,
> > > > which is too fragile.
> > >
> > > Yes, I think it should be handled by the page allocator.  There may be
> >
> > I agree with Matthew, Harry, and David. The page allocator seems best
> > suited to handle HWPoison subpages without any new folio allocations.
>
> Sorry, I should have been clearer. I don't think adding an **explicit**
> interface to free an hwpoison folio is worth it; instead, handling it
> implicitly during freeing of a folio seems more feasible.

That's fine with me, it's just more to be taken care of by the page allocator.

>
> > > some complexity to this that I've missed, e.g. if hugetlb wants to
> > > retain the good 2MB chunks of a 1GB allocation.  I'm not sure whether
> > > that's a useful thing to do or not.
> > >
> > > > In fact, that can be done by teaching free_pages_prepare() how to handle
> > > > the case where one or more subpages of a folio are hwpoison pages.
> > > >
> > > > How should this be implemented in the page allocator in the memdescs world?
> > > > Hmm, we'll want to do some kind of non-uniform split, without actually
> > > > splitting the folio but allocating struct buddy?
> > >
> > > Let me sketch that out, realising that it's subject to change.
> > >
> > > A page in buddy state can't need a memdesc allocated.  Otherwise we're
> > > allocating memory to free memory, and that way lies madness.  We can't
> > > do the hack of "embed struct buddy in the page that we're freeing"
> > > because HIGHMEM.  So we'll never shrink struct page smaller than struct
> > > buddy (which is fine because I've laid out how to get to a 64 bit struct
> > > buddy, and we're probably two years from getting there anyway).
> > >
> > > My design for handling hwpoison is that we do allocate a struct hwpoison
> > > for a page.  It looks like this (for now, in my head):
> > >
> > > struct hwpoison {
> > >         memdesc_t original;
> > >         ... other things ...
> > > };
> > >
> > > So we can replace the memdesc in a page with a hwpoison memdesc when we
> > > encounter the error.  We still need a folio flag to indicate that "this
> > > folio contains a page with hwpoison".  I haven't put much thought yet
> > > into interaction with HUGETLB_PAGE_OPTIMIZE_VMEMMAP; maybe "other things"
> > > includes an index of where the actually poisoned page is in the folio,
> > > so it doesn't matter if the pages alias with each other as we can recover
> > > the information when it becomes useful to do so.
> > >
> > > > But... for now I think hiding this complexity inside the page allocator
> > > > is good enough. For now this would just mean splitting a frozen page
> >
> > I want to add one more thing. For HugeTLB, the kernel clears the
> > HWPoison flag on the folio and moves it to every raw page in the
> > raw_hwp_page list (see folio_clear_hugetlb_hwpoison). So the page
> > allocator has no hint that some pages passed into free_frozen_pages
> > have HWPoison. It has to traverse 2^order pages to tell, if I am not
> > mistaken, which goes against the past effort to reduce sanity checks.
> > I believe this is one reason I chose to handle the problem in
> > hugetlb / memory-failure.
>
> I think we can skip calling folio_clear_hugetlb_hwpoison() and teach the

Nit: also skip folio_free_raw_hwp, so that the hugetlb-specific llist
containing the raw pages and owned by memory-failure is preserved? And
then expect the page allocator to use it for whatever purpose and free
the llist afterwards? That doesn't seem to follow the correct ownership
rules.

> buddy allocator to handle this. free_pages_prepare() already handles the
> (PageHWPoison(page) && !order) case; we just need to extend that to
> support hugetlb folios as well.
>
> > For the new interface Harry requested, is it the caller's
> > responsibility to ensure that the folio contains HWPoison pages (or,
> > even better, maybe point out the exact ones?), so that the page
> > allocator at least doesn't waste cycles searching for non-existent
> > HWPoison pages in the set?
>
> With implicit handling it would be the page allocator's responsibility
> to check and handle hwpoison hugetlb folios.

Does this mean we must bake hugetlb-specific logic into the page
allocator's freeing path? AFAICT today the contract in
free_frozen_pages doesn't involve much hugetlb information.

I see there is already some hugetlb-specific logic in page_alloc.c,
but perhaps that doesn't justify adding more.
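
For context, the single-page handling that is already there looks
roughly like this (paraphrased from memory, exact calls may differ by
kernel version), and it knows nothing about compound/hugetlb folios:

static bool free_pages_prepare(struct page *page, unsigned int order)
{
	...
	if (unlikely(PageHWPoison(page)) && !order) {
		/* Do not let hwpoison pages hit pcplists/buddy */
		reset_page_owner(page, order);
		page_table_check_free(page, order);
		return false;	/* leaked on purpose, never reaches a freelist */
	}
	...
}

So extending it to hugetlb means the allocator has to learn, before it
tears the compound structure down, that only some subpages are bad.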

>
> > Or do the caller and page allocator need to agree on some contract? Say
> > the caller has to set the has_hwpoisoned flag in a non-zero order folio
> > before freeing it. This gives the old interface free_frozen_pages an
> > easy way to check, using the has_hwpoisoned flag from the second page.
> > I know has_hwpoisoned is "#if defined" on THP and using it for hugetlb
> > is probably not very clean, but are there other concerns?
>
> As you mentioned, has_hwpoisoned is used for THPs. And for a hugetlb
> folio, folio_test_hwpoison() returns true if it has at least one
> hwpoison page (assuming that we don't clear it before freeing).
>
> So in free_pages_prepare():
>
> if (folio_test_hugetlb(folio) && folio_test_hwpoison(folio)) {
>   /*
>    * Handle hwpoison hugetlb folios; transfer the error information
>    * to individual pages, clear hwpoison flag of the folio,
>    * perform non-uniform split on the frozen folio.
>    */
> } else if (PageHWPoison(page) && !order) {
>   /* We already handle this in the allocator. */
> }
>
> Would this be sufficient?

Wouldn't this confuse the page allocator into thinking the healthy
head page is HWPoison (when it actually isn't)? I thought that was one
of the reasons has_hwpoisoned exists.
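
To illustrate (a minimal sketch, not actual kernel code, assuming
folio_test_has_hwpoisoned() could be made available for hugetlb folios;
today it is compiled in only when THP and CONFIG_MEMORY_FAILURE are
enabled):

if (order && folio_test_has_hwpoisoned(folio)) {
	/*
	 * Some subpage took the error; the head page itself may be
	 * perfectly healthy. Walk the subpages (or the raw_hwp list)
	 * and sequester only the PageHWPoison() ones.
	 */
} else if (PageHWPoison(page) && !order) {
	/* The single page itself is poisoned; existing path. */
}

i.e. keying off folio_test_hwpoison() alone can't distinguish "the head
page is bad" from "some subpage is bad".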

>
> Or do we want to handle THPs as well, in case of split failure in
> memory_failure()? If so, we need to handle the folio_test_has_hwpoisoned()
> case as well...

Yeah, I think this is another good use case for our request to the page allocator.

>
> > > > inside the page allocator (probably non-uniform?). We can later re-implement
> > > > this to provide better support for memdescs.
> > >
> > > Yes, I like this approach.  But then I'm not the page allocator
> > > maintainer ;-)
> >
> > If page allocator maintainers can weigh in here, that will be very helpful!
>
> Yeah, I'm not a maintainer either ;) it'll be great to get opinions
> from page allocator folks!
>
> --
> Cheers,
> Harry / Hyeonggon
