Message-ID: <bd206c0e-3d99-4656-ad2f-f57316232498@lucifer.local>
Date: Fri, 12 Sep 2025 15:01:02 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Kiryl Shutsemau <kas@...nel.org>,
Nico Pache <npache@...hat.com>, linux-mm@...ck.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, ziy@...dia.com,
baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
ryan.roberts@....com, dev.jain@....com, corbet@....net,
rostedt@...dmis.org, mhiramat@...nel.org,
mathieu.desnoyers@...icios.com, akpm@...ux-foundation.org,
baohua@...nel.org, willy@...radead.org, peterx@...hat.com,
wangkefeng.wang@...wei.com, usamaarif642@...il.com,
sunnanyong@...wei.com, vishal.moola@...il.com,
thomas.hellstrom@...ux.intel.com, yang@...amperecomputing.com,
aarcange@...hat.com, raquini@...hat.com, anshuman.khandual@....com,
catalin.marinas@....com, tiwai@...e.de, will@...nel.org,
dave.hansen@...ux.intel.com, jack@...e.cz, cl@...two.org,
jglisse@...gle.com, surenb@...gle.com, zokeefe@...gle.com,
rientjes@...gle.com, mhocko@...e.com, rdunlap@...radead.org,
hughd@...gle.com, richard.weiyang@...il.com, lance.yang@...ux.dev,
vbabka@...e.cz, rppt@...nel.org, jannh@...gle.com, pfalcato@...e.de
Subject: Re: [PATCH v11 00/15] khugepaged: mTHP support
On Fri, Sep 12, 2025 at 03:46:36PM +0200, David Hildenbrand wrote:
> On 12.09.25 15:37, Johannes Weiner wrote:
> > On Fri, Sep 12, 2025 at 02:25:31PM +0200, David Hildenbrand wrote:
> > > On 12.09.25 14:19, Kiryl Shutsemau wrote:
> > > > On Thu, Sep 11, 2025 at 09:27:55PM -0600, Nico Pache wrote:
> > > > > The following series provides khugepaged with the capability to collapse
> > > > > anonymous memory regions to mTHPs.
> > > > >
> > > > > To achieve this we generalize the khugepaged functions to no longer depend
> > > > > on PMD_ORDER. Then during the PMD scan, we use a bitmap to track individual
> > > > > pages that are occupied (!none/zero). After the PMD scan is done, we do
> > > > > binary recursion on the bitmap to find the optimal mTHP sizes for the PMD
> > > > > range. The restriction on max_ptes_none is removed during the scan, to make
> > > > > sure we account for the whole PMD range. When no mTHP size is enabled, the
> > > > > legacy behavior of khugepaged is maintained. max_ptes_none will be scaled
> > > > > by the attempted collapse order to determine how full an mTHP must be to be
> > > > > eligible for the collapse to occur. If an attempted mTHP collapse range
> > > > > contains swapped-out or shared pages, we don't perform the collapse. It is
> > > > > now also possible to collapse to mTHPs without requiring the PMD THP size
> > > > > to be enabled.
> > > > >
> > > > > When enabling (m)THP sizes, if max_ptes_none >= HPAGE_PMD_NR/2, it will be
> > > > > automatically capped to HPAGE_PMD_NR/2 - 1 (255 with 4K pages) for
> > > > > mTHP collapses to prevent collapse "creep" behavior. This prevents
> > > > > constantly promoting mTHPs to the next available size, which would occur
> > > > > because a collapse introduces more non-zero pages that would satisfy the
> > > > > promotion condition on subsequent scans.
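(For anybody skimming along: as I understand it, the scaled cutoff amounts to
something like the below - a rough sketch only, simplified from my reading of
the series, with names/details approximate:

static bool range_eligible(unsigned int order, unsigned int none_or_zero)
{
        unsigned int max_none = khugepaged_max_ptes_none; /* 0..HPAGE_PMD_NR - 1 */

        /* Cap for mTHP orders to prevent collapse "creep". */
        if (order != HPAGE_PMD_ORDER && max_none >= HPAGE_PMD_NR / 2)
                max_none = HPAGE_PMD_NR / 2 - 1;

        /* Scale the PMD-order tunable down to the attempted order. */
        max_none >>= (HPAGE_PMD_ORDER - order);

        /* A 1 << order run of PTEs qualifies if it is full enough. */
        return none_or_zero <= max_none;
}

Obviously modulo locking, uffd checks and the rest.)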
> > > >
> > > > Hm. Maybe instead of capping at HPAGE_PMD_NR/2 - 1 we can count
> > > > all-zero 4k pages as none_or_zero? It mirrors the logic of the shrinker.
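(For context for others: the shrinker-style detection Kiryl refers to amounts
to scanning each subpage for non-zero bytes - something like the below
simplified sketch; the deferred split shrinker does this via memchr_inv(),
though I am eliding its early-exit and accounting details:

static unsigned int count_zero_filled(struct folio *folio)
{
        unsigned int i, zeroes = 0;

        for (i = 0; i < folio_nr_pages(folio); i++) {
                void *kaddr = kmap_local_folio(folio, i * PAGE_SIZE);

                /* memchr_inv() returns NULL if the page is all zeroes. */
                if (!memchr_inv(kaddr, 0, PAGE_SIZE))
                        zeroes++;
                kunmap_local(kaddr);
        }

        return zeroes;
}

The cost being that khugepaged would then read up to 2MiB of page contents
per PMD scan to classify those pages.)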
> > > >
> > >
> > > I am all for not adding any more ugliness on top of all the ugliness we
> > > added in the past.
> > >
> > > I will soon propose deprecating that parameter in favor of something
> > > that makes a bit more sense.
> > >
> > > In essence, we'll likely have an "eagerness" parameter that ranges from
> > > 0 to 10. 10 is essentially "always collapse" and 0 "never collapse
> > > unless everything is populated".
> > >
> > > In between we will have more flexibility on how to set these values.
> > >
> > > Likely 9 will be around 50% to not even motivate the user to set
> > > something that does not make sense (creep).
> >
> > One observation we've had from production experiments is that the
> > optimal number here isn't static. If you have plenty of memory, then
> > even very sparse THPs are beneficial.
>
> Exactly.
>
> And willy suggested something like "eagerness" similar to "swappiness" that
> gives us more flexibility when implementing it, including dynamically
> adjusting the values in the future.
I like the idea of abstracting it like this, and - in a rare case of kernel
developer agreement (esp. around naming :) - Matthew, David and I all rather
loved referring to this as 'eagerness' here :)
The great benefit in relation to dynamic state is that we can simply treat this
as an _abstract_ thing. I.e. 'how eager are we to establish THPs, trading off
against memory pressure and higher order folio resource consumption'.
And then we can decide precisely how that is implemented in practice - and a
sensible approach would indeed be to differentiate between scenarios where we
might be more willing to chomp up memory vs. those where we are not.
This also aligns nicely with the 'grand glorious future' we all dream of (don't
we??) for THP, where things are automated as much as possible and the _kernel
decides_ what's best as far as is possible.
As with swappiness, it is essentially a 'hint' to us in abstract terms rather
than simply exposing an internal kernel parameter.
(Credit to Matthew for making this abstraction suggestion in the THP cabal
meeting by the way!)
>
> >
> > An extreme example: if all your THPs have 2/512 pages populated,
> > that's still cutting TLB pressure in half!
>
> IIRC, you create more pressure on the huge entries, where you might have
> fewer TLB entries :) But yes, there can be cases where it is beneficial, if
> there is absolutely no memory pressure.
>
> >
> > So in the absence of memory pressure, allocating and collapsing should
> > optimally be aggressive even on very sparse regions.
>
> Yes, we discussed that as well in the THP cabal.
>
> It's very similar to max_ptes_swap: that parameter should not exist.
> If there is no memory pressure we can just swap it in. If there is memory
> pressure we probably would not want to swap in much.
Yes, but at least an eagerness parameter gets us closer to this ideal.
Of course, I agree that max_ptes_none should simply never have been exposed like
this. It is emblematic of a 'just shove a parameter into a tunable/sysfs and let
the user decide' approach you see in the kernel sometimes.
This is problematic as users have no earthly idea how to set the parameter (most
likely they never touch it), and only start fiddling with it when issues arise
and it looks like a viable solution of some kind.
The problem is that users usually lack a great deal of the context the kernel
has, and may make decisions that happen to work in one situation but not another.
TL;DR - this kind of interface is just lazy and we have to assess these kinds of
tunables based on the actual RoI + understanding from the user's perspective.
>
> >
> > On the flipside, if there is memory pressure, TLB benefits are very
> > quickly drowned out by faults and paging events. And I mean real
> > memory pressure. If all that's happening is that somebody is streaming
> > through filesystem data, the optimal behavior is still to be greedy.
> >
> > Another consideration is that if we need to break large folios, we
> > should start with colder ones that provide less benefit, and defer the
> > splitting of hotter ones as long as possible.
>
> Yes, we discussed that as well: there is no QoS right now, which is rather
> suboptimal.
It's also kinda funny that the max_ptes_none default is 511 right now, so pretty
damn eager. Which might be part of the reason people often observe THP chomping
through resources...
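To spell that out - with 4K pages HPAGE_PMD_NR is 512, and the stock
eligibility check boils down to (simplified from the scan logic in
mm/khugepaged.c):

        /* none_or_zero = empty/zero PTEs seen in the 512-entry PMD range. */
        if (none_or_zero > khugepaged_max_ptes_none)    /* default 511 */
                goto out;       /* too sparse, skip the collapse */

I.e. a single populated base page in a 2MiB range is enough to collapse,
potentially adding up to 511 * 4KiB of memory per collapse.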
>
> >
> > Maybe a good direction would be to move splitting out of the shrinker
> > and tie it to the (refault-aware) anon reclaim. And then instead of a
> > fixed population threshold, collapse on a pressure gradient that
> > starts with "no pressure/thrashing and at least two base pages in a THP
> > region" and ends with "reclaim is splitting everything, back off".
>
> I agree, but have to think further about how that could work in practice.
That'd be lovely actually!
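Hand-waving wildly, I could imagine the gradient ending up as something like
the below - completely hypothetical of course, every name invented:

/* Hypothetical: scale a max_ptes_none-style threshold by reclaim pressure,
 * where pressure runs from 0 (idle) to 100 (reclaim splitting everything).
 */
static unsigned int collapse_max_none(unsigned int pressure)
{
        if (pressure >= 100)
                return 0;       /* back off: require fully populated */

        /* Idle: allow up to HPAGE_PMD_NR - 2 empty PTEs, i.e. collapse with
         * as few as two base pages populated; taper towards zero as
         * pressure/thrashing ramps up.
         */
        return ((HPAGE_PMD_NR - 2) * (100 - pressure)) / 100;
}

With the actual signal derived from refault/thrashing data rather than a
made-up scalar, obviously.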
>
> --
> Cheers
>
> David / dhildenb
>
Cheers, Lorenzo