Message-ID: <aWkhbWR-3fWjeTaE@thinkstation>
Date: Thu, 15 Jan 2026 17:23:23 +0000
From: Kiryl Shutsemau <kas@...nel.org>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	Muchun Song <muchun.song@...ux.dev>, Matthew Wilcox <willy@...radead.org>, 
	Usama Arif <usamaarif642@...il.com>, Frank van der Linden <fvdl@...gle.com>, 
	Oscar Salvador <osalvador@...e.de>, Mike Rapoport <rppt@...nel.org>, 
	Vlastimil Babka <vbabka@...e.cz>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, 
	Zi Yan <ziy@...dia.com>, Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>, 
	Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>, kernel-team@...a.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCHv3 10/15] mm/hugetlb: Remove fake head pages

On Thu, Jan 15, 2026 at 05:49:43PM +0100, David Hildenbrand (Red Hat) wrote:
> On 1/15/26 15:45, Kiryl Shutsemau wrote:
> > HugeTLB Vmemmap Optimization (HVO) reduces memory usage by freeing most
> > vmemmap pages for huge pages and remapping the freed range to a single
> > page containing the struct page metadata.
> > 
> > With the new mask-based compound_info encoding (for power-of-2 struct
> > page sizes), all tail pages of the same order are now identical
> > regardless of which compound page they belong to. This means the tail
> > pages can be truly shared without fake heads.
> > 
> > Allocate a single page of initialized tail struct pages per NUMA node
> > per order in the vmemmap_tails[] array in pglist_data. All huge pages
> > of that order on the node share this tail page, mapped read-only into
> > their vmemmap. The head page remains unique per huge page.
> > 
> > This eliminates fake heads while maintaining the same memory savings,
> > and simplifies compound_head() by removing fake head detection.
> > 
> > Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
> > ---
> >   include/linux/mmzone.h | 16 ++++++++++++++-
> >   mm/hugetlb_vmemmap.c   | 44 ++++++++++++++++++++++++++++++++++++++++--
> >   mm/sparse-vmemmap.c    | 44 ++++++++++++++++++++++++++++++++++--------
> >   3 files changed, 93 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 322ed4c42cfc..2ee3eb610291 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -82,7 +82,11 @@
> >    * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect
> >    * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit.
> >    */
> > -#define MAX_FOLIO_ORDER		get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G)
> > +#ifdef CONFIG_64BIT
> > +#define MAX_FOLIO_ORDER		(34 - PAGE_SHIFT)
> > +#else
> > +#define MAX_FOLIO_ORDER		(30 - PAGE_SHIFT)
> > +#endif
> 
> Where do these magic values stem from, and how do they relate to the
> comment above that clearly spells out 16G vs. 1G?

This doesn't change the resulting value: 1UL << 34 is 16 GiB and
1UL << 30 is 1 GiB; subtracting PAGE_SHIFT gives the order.

The change allows the value to be used to define NR_VMEMMAP_TAILS,
which is used to specify the size of the vmemmap_tails[] array.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
