Message-ID: <k54teuep6r63gbgivpka32tk47zvzmy5thik2mekl5xpycvead@fth2lv4kuicg>
Date: Fri, 12 Sep 2025 16:38:59 +0100
From: Kiryl Shutsemau <kas@...nel.org>
To: Pedro Falcato <pfalcato@...e.de>
Cc: David Hildenbrand <david@...hat.com>, Johannes Weiner <hannes@...xchg.org>,
 Nico Pache <npache@...hat.com>, linux-mm@...ck.org, linux-doc@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
 ziy@...dia.com, baolin.wang@...ux.alibaba.com, lorenzo.stoakes@...cle.com,
 Liam.Howlett@...cle.com, ryan.roberts@....com, dev.jain@....com,
 corbet@....net, rostedt@...dmis.org, mhiramat@...nel.org,
 mathieu.desnoyers@...icios.com, akpm@...ux-foundation.org, baohua@...nel.org,
 willy@...radead.org, peterx@...hat.com, wangkefeng.wang@...wei.com,
 usamaarif642@...il.com, sunnanyong@...wei.com, vishal.moola@...il.com,
 thomas.hellstrom@...ux.intel.com, yang@...amperecomputing.com,
 aarcange@...hat.com, raquini@...hat.com, anshuman.khandual@....com,
 catalin.marinas@....com, tiwai@...e.de, will@...nel.org,
 dave.hansen@...ux.intel.com, jack@...e.cz, cl@...two.org, jglisse@...gle.com,
 surenb@...gle.com, zokeefe@...gle.com, rientjes@...gle.com, mhocko@...e.com,
 rdunlap@...radead.org, hughd@...gle.com, richard.weiyang@...il.com,
 lance.yang@...ux.dev, vbabka@...e.cz, rppt@...nel.org, jannh@...gle.com
Subject: Re: [PATCH v11 00/15] khugepaged: mTHP support

On Fri, Sep 12, 2025 at 04:15:23PM +0100, Pedro Falcato wrote:
> On Fri, Sep 12, 2025 at 03:46:36PM +0200, David Hildenbrand wrote:
> > On 12.09.25 15:37, Johannes Weiner wrote:
> > > On Fri, Sep 12, 2025 at 02:25:31PM +0200, David Hildenbrand wrote:
> > > > On 12.09.25 14:19, Kiryl Shutsemau wrote:
> > > > > On Thu, Sep 11, 2025 at 09:27:55PM -0600, Nico Pache wrote:
> > > > > > The following series provides khugepaged with the capability to collapse
> > > > > > anonymous memory regions to mTHPs.
> > > > > > 
> > > > > > To achieve this we generalize the khugepaged functions to no longer depend
> > > > > > on PMD_ORDER. Then during the PMD scan, we use a bitmap to track individual
> > > > > > pages that are occupied (!none/zero). After the PMD scan is done, we do
> > > > > > binary recursion on the bitmap to find the optimal mTHP sizes for the PMD
> > > > > > range. The restriction on max_ptes_none is removed during the scan, to make
> > > > > > sure we account for the whole PMD range. When no mTHP size is enabled, the
> > > > > > legacy behavior of khugepaged is maintained. max_ptes_none will be scaled
> > > > > > by the attempted collapse order to determine how full an mTHP must be to
> > > > > > be eligible for collapse. If an mTHP collapse is attempted but the range
> > > > > > contains swapped-out or shared pages, we don't perform the collapse. It is
> > > > > > now also possible to collapse to mTHPs without requiring the PMD THP size
> > > > > > to be enabled.
> > > > > > 
> > > > > > When enabling (m)THP sizes, if max_ptes_none >= HPAGE_PMD_NR/2, it will
> > > > > > be automatically capped to HPAGE_PMD_NR/2 - 1 (255 with 4K pages) for
> > > > > > mTHP collapses to prevent collapse "creep" behavior. This prevents
> > > > > > constantly promoting mTHPs to the next available size, which would occur
> > > > > > because a collapse introduces more non-zero pages that would satisfy the
> > > > > > promotion condition on subsequent scans.
> > > > > 
> > > > > Hm. Maybe instead of capping at HPAGE_PMD_NR/2 - 1 we can count
> > > > > all-zero 4k pages as none_or_zero? It would mirror the logic of the shrinker.
> > > > > 
> > > > 
> > > > I am all for not adding any more ugliness on top of all the ugliness we
> > > > added in the past.
> > > > 
> > > > I will soon propose deprecating that parameter in favor of something
> > > > that makes a bit more sense.
> > > > 
> > > > In essence, we'll likely have an "eagerness" parameter that ranges from
> > > > 0 to 10: 10 is essentially "always collapse" and 0 is "collapse only
> > > > when everything is populated".
> > > > 
> > > > In between we will have more flexibility on how to set these values.
> > > > 
> > > > Likely 9 will be around 50%, so as not to motivate the user to set
> > > > something that does not make sense (creep).
> > > 
> > > One observation we've had from production experiments is that the
> > > optimal number here isn't static. If you have plenty of memory, then
> > > even very sparse THPs are beneficial.
> > 
> > Exactly.
> > 
> > And willy suggested something like "eagerness", similar to "swappiness", which
> > gives us more flexibility when implementing it, including dynamically
> > adjusting the values in the future.
> >
> 
> Ideally we would be able to also apply this to the page faulting paths.
> In many cases, there's no good reason to create a THP on the first fault...
> 
> > > 
> > > An extreme example: if all your THPs have 2/512 pages populated,
> > > that's still cutting TLB pressure in half!
> > 
> > IIRC, you create more pressure on the huge entries, where you might have
> > fewer TLB entries :) But yes, there can be cases where it is beneficial, if
> > there is absolutely no memory pressure.
> >
> 
> Correct, but it depends on the microarchitecture. On modern x86_64 AMD, it
> happens that the L1 TLB entries are shared between 4K/2M/1G. This was not
> (is not?) the case for Intel, where e.g. back on Kaby Lake, you had separate
> entries for 4K/2MB/1GB.

On Intel, the secondary TLB is shared between 4k and 2M; the L2 TLB for
1G is separate.

> Maybe in the Great Glorious Future (how many of those do we have?!) it would
> be a good idea to take these kinds of things into account. Just because we can
> map a THP doesn't mean we should.
> 
> Shower thought: it might be in these cases especially where the FreeBSD
> reservation system comes in handy - best effort allocating a THP, but not
> actually mapping it as such until you really _know_ it is hot - and until
> then, memory reclaim can just break your THP down if it really needs to.

This is just silly. All downsides without benefit until maybe later. And
for short-lived processes the "later" never comes.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
