Message-ID: <aVQ7RwxRaXC5kAG2@casper.infradead.org>
Date: Tue, 30 Dec 2025 20:51:19 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Chris Mason <clm@...a.com>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>, Zi Yan <ziy@...dia.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Qi Zheng <qi.zheng@...ux.dev>, hannes@...xchg.org, hughd@...gle.com,
mhocko@...e.com, muchun.song@...ux.dev, david@...nel.org,
lorenzo.stoakes@...cle.com, harry.yoo@...cle.com,
imran.f.khan@...cle.com, kamalesh.babulal@...cle.com,
axelrasmussen@...gle.com, yuanchu@...gle.com, weixugc@...gle.com,
chenridong@...weicloud.com, mkoutny@...e.com,
akpm@...ux-foundation.org, hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com, lance.yang@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>, Chris Mason <clm@...com>
Subject: Re: [PATCH v2 00/28] Eliminate Dying Memory Cgroup
On Tue, Dec 30, 2025 at 02:18:51PM -0500, Chris Mason wrote:
> >>>> I just think you should do a preliminary review of the AI review results
> >>>> instead of sending them out directly. Otherwise, if everyone does this,
> >>>> the community will be full of bots.
>
> I do think it's awkward to dump the whole review output for the patch
> series in a single message. It looks like there's a sudden jump to XML?
> It's better to reply to the individual patches with the comments
> inline, which I think is where Roman is trying to go long term.
I don't know what Roman's trying to do long-term, but his email
that started this thread was so badly written that it was offensive.
Had it been sent to me, I would have responded in the style of Arkell
v Pressdram.
> With BPF, it looks more like this:
> https://lore.kernel.org/bpf/?q=AI+reviewed+your+patch
That's actually useful.
> >>>> 2. Looking at the mm prompt: https://github.com/masoncl/review-prompts/blob/main/mm.md , are you sure the patterns are all right?
> >> a. Page/Folio States, Large folios require per-page state tracking for
> >> Reference counts. I thought we want to get rid of per page refcount.
>
> Early in prompt development I hand picked a few hundred patches from
> 6.16 fixing bugs, and I iterated on these adding subsystem knowledge to
> catch the known bugs. That's where that rule came from, but as you say
> there's a risk this information gets old. Do we want to get rid of per
> page refcounts or have we done it? (more on that at the bottom of the
> email).
There is no such thing as a per-page reference count. Any attempt to
access the page reference count redirects to the folio refcount. This
has been the case since 2016 (four years before folios existed). See
commit ddc58f27f9ee.
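From memory, the redirect looks something like this (a sketch of
include/linux/page_ref.h, not a verbatim quote):

	static inline int folio_ref_count(const struct folio *folio)
	{
		return atomic_read(&folio->page._refcount);
	}

	static inline int page_count(const struct page *page)
	{
		/* Any page's refcount resolves to its folio's refcount. */
		return folio_ref_count(page_folio(page));
	}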
We do want to get rid of calls to get_page() and put_page() for a
variety of reasons that will be long and painful to write out.
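The usual shape of a conversion is to resolve the folio once and take
the reference on it directly (a sketch; page_folio(), folio_get() and
folio_put() are the real interfaces, the surrounding code is
illustrative):

	struct folio *folio = page_folio(page);

	folio_get(folio);		/* was: get_page(page) */
	/* ... operate on the folio ... */
	folio_put(folio);		/* was: put_page(page) */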
> As an example of how I'd fix the prompt if the per page state tracking
> were causing problems (and if we didn't want to just remove it), I asked
> claude to analyze how it is still used. The output is below, I'd double
> check things as best I could, shorten into prompt form and send to the
> list for review.
>
> Per-Page Tracking in Large Folios - Analysis
> =============================================
>
> Based on analysis of mm/*.c files and commit history, MM-004's claim is
> still partially true - large folios do need per-page tracking for some
> bits, though recent work has significantly reduced this.
>
>
> Bits That Still Require Per-Page Tracking
> ------------------------------------------
>
> 1. PG_hwpoison (include/linux/page-flags.h:118)
>
> Defined as PAGEFLAG(HWPoison, hwpoison, PF_ANY), this flag is set on
> individual pages within a large folio when hardware memory corruption
> is detected.
>
> The folio_test_has_hwpoisoned() flag on the second page indicates at
> least one subpage is poisoned, but does not identify which one.
>
> When splitting a large folio, page_range_has_hwpoisoned() in
> mm/huge_memory.c:3467 iterates through pages checking PageHWPoison()
> for each:
>
> static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
> {
> 	for (; nr_pages; page++, nr_pages--)
> 		if (PageHWPoison(page))
> 			return true;
> 	return false;
> }
>
> Used in rmap code (mm/rmap.c:1990, 2070, 2473) to check individual
> subpages when unmapping or migrating.
>
> 2. PG_anon_exclusive (include/linux/page-flags.h:146)
>
> Per the comment at include/linux/page-flags.h:139-145:
>
> "Depending on the way an anonymous folio can be mapped into a page
> table (e.g., single PMD/PUD/CONT of the head page vs. PTE-mapped
> THP), PG_anon_exclusive may be set only for the head page or for
> tail pages of an anonymous folio. For now, we only expect it to be
> set on tail pages for PTE-mapped THP."
>
> Used at mm/rmap.c:1408-1416: when RMAP_EXCLUSIVE flag is set for
> PTE-level mappings, it iterates through each page:
>
> 	for (i = 0; i < nr_pages; i++)
> 		SetPageAnonExclusive(page + i);
>
> HugeTLB stores this on head page only (see PageAnonExclusive() at
> include/linux/page-flags.h:1153-1162), but PTE-mapped THP needs
> per-page tracking.
>
>
> Recent Changes - Per-Page Mapcount Removed
> ------------------------------------------
>
> Commit 749492229e3bd ("mm: stop maintaining the per-page mapcount of
> large folios") by David Hildenbrand (March 2025) introduced
> CONFIG_NO_PAGE_MAPCOUNT which:
>
> - Stops maintaining per-page mapcounts in tail pages of large folios
> - Tail page mapcount is now always logically 0 (-1 value)
> - Removed _nr_pages_mapped tracking
>
> This was a significant simplification, but it does not affect the
> per-page flag tracking described above.
>
>
> Flags Stored in Second Page Only (Not Per-Page)
> -----------------------------------------------
>
> These are stored in the first tail page (FOLIO_SECOND_PAGE) and apply to
> the entire folio, not individual pages:
>
> - PG_has_hwpoisoned - indicates some page in folio is poisoned
> - PG_large_rmappable - folio is rmappable
> - PG_partially_mapped - folio is partially mapped
>
> See PAGE_FLAGS_SECOND definition at include/linux/page-flags.h:1218-1220.
>
>
> Conclusion
> ----------
>
> While per-page mapcount has been eliminated, PG_hwpoison and
> PG_anon_exclusive (for PTE-mapped THP) still require per-page tracking
> in large folios. MM-004's claim remains valid for these specific bits.
>
> Key source files:
> - include/linux/page-flags.h (flag definitions and accessors)
> - mm/huge_memory.c (folio split handling)
> - mm/rmap.c (reverse mapping with per-page exclusive tracking)
This is pretty good, and yet dangerously wrong in some of the nuances
it misses.  Which probably summarises the state of the art nicely ;-)
To start with, all flags marked as PF_ANY are set on individual pages
rather than only the folio. So that's currently:
PAGEFLAG(Private, private, PF_ANY)
PAGEFLAG(HWPoison, hwpoison, PF_ANY)
PAGEFLAG(VmemmapSelfHosted, vmemmap_self_hosted, PF_ANY)
__SETPAGEFLAG(Head, head, PF_ANY)
return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
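For reference, the flag policies are defined roughly like this
(paraphrased from include/linux/page-flags.h, not a verbatim quote):

	/* PF_ANY: the flag may live on any page, head or tail. */
	#define PF_ANY(page, enforce)	PF_POISONED_CHECK(page)
	/* PF_HEAD: the operation is redirected to the head page. */
	#define PF_HEAD(page, enforce)	PF_POISONED_CHECK(compound_head(page))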
Now, PG_private is a flag we're trying to get rid of -- it should be
identical to (folio->private != NULL), so I haven't made any effort
to convert it from being PF_ANY.  I'm not too unhappy that your chatbot
doesn't talk about PG_private, but a fuller answer would mention it.
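Once that conversion is done, the flag carries no information beyond
the pointer test; something like this hypothetical helper (the name is
made up for illustration):

	/* Hypothetical: replaces the PG_private bit entirely. */
	static inline bool folio_private_present(const struct folio *folio)
	{
		return folio->private != NULL;
	}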
PG_hwpoison and PG_anon_exclusive will remain per-page state in a
memdesc world, and there's a plan to handle those, so there's no need to
eliminate them.
PG_vmemmap_self_hosted is a very, very internal flag. It's OK to not
know about it.
PG_head has to remain per-page state for now for obvious reasons ;-)
In a memdesc world, there will be no way to ask if a page is the first
page of an allocation, so this flag will not be needed.
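In the memdesc proposals, struct page shrinks to a single tagged
pointer, so every page in an allocation refers to the same descriptor
and none of them is distinguishable as the first.  A sketch, not the
final design:

	struct page {
		unsigned long memdesc;	/* tagged pointer: folio, slab, ... */
	};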
I believe there are some subtleties around PG_hwpoison and hugetlb that
are not fully captured above, but I'm not convinced of my ability to
state definitively what they currently are, so I'll leave that for
somebody else to do.
---
Looking through your prompts, there are definitely some conditions that
could be profitably added. For example, pages which are mapped into
page tables must be PG_uptodate (we have various assertions in the MM
code that this is true and they occasionally trigger).
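The shape of the check, roughly (illustrative; not a verbatim quote of
any particular assertion):

	/* Mapping a !uptodate folio would let userspace see stale or
	 * uninitialised data, so the MM asserts this at map time. */
	VM_WARN_ON_ONCE_FOLIO(!folio_test_uptodate(folio), folio);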