Message-ID: <pr22ew7pmqercu5tlabw2ros4cdeoyhlqbqmogvfqgekesfbfz@f5nls3gxj76t>
Date: Thu, 18 Dec 2025 22:18:15 +0000
From: Kiryl Shutsemau <kas@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>, David Hildenbrand <david@...nel.org>,
Matthew Wilcox <willy@...radead.org>, Usama Arif <usamaarif642@...il.com>,
Frank van der Linden <fvdl@...gle.com>
Cc: Oscar Salvador <osalvador@...e.de>, Mike Rapoport <rppt@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Zi Yan <ziy@...dia.com>, Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>, kernel-team@...a.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCHv2 00/14] Eliminate fake head pages from vmemmap
optimization

Oopsie. Add the Subject.

On Thu, Dec 18, 2025 at 03:09:31PM +0000, Kiryl Shutsemau wrote:
> This series removes "fake head pages" from the HugeTLB vmemmap
> optimization (HVO) by changing how tail pages encode their relationship
> to the head page.
>
> It simplifies compound_head() and page_ref_add_unless(). Both are in the
> hot path.
>
> Background
> ==========
>
> HVO reduces memory overhead by freeing vmemmap pages for HugeTLB pages
> and remapping the freed virtual addresses to a single physical page.
> Previously, all tail page vmemmap entries were remapped to the first
> vmemmap page (containing the head struct page), creating "fake heads" -
> tail pages that appear to have PG_head set when accessed through the
> deduplicated vmemmap.
>
> This required special handling in compound_head() to detect and work
> around fake heads, adding complexity and overhead to a very hot path.
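>
> For reference, the current fake-head handling looks roughly like the
> sketch below (simplified and illustrative, not the verbatim kernel code):
>
>	/* Simplified sketch of today's fake-head detection. */
>	static const struct page *fixup_fake_head(const struct page *page)
>	{
>		if (test_bit(PG_head, &page->flags)) {
>			/*
>			 * With HVO, a tail's vmemmap entry may alias the
>			 * first vmemmap page, so PG_head alone cannot be
>			 * trusted. If the next struct page encodes a head
>			 * pointer (low bit set), this "head" is really a
>			 * tail of that compound page.
>			 */
>			unsigned long head = READ_ONCE(page[1].compound_head);
>
>			if (head & 1)
>				return (const struct page *)(head - 1);
>		}
>		return page;
>	}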
>
> New Approach
> ============
>
> For architectures/configs where sizeof(struct page) is a power of 2 (the
> common case), this series changes how the position of the head page is
> encoded in the tail pages.
>
> Instead of storing a pointer to the head page, the ->compound_info
> (renamed from ->compound_head) now stores a mask.
>
> The mask can be applied to any tail page's virtual address to compute
> the head page address. The key insight is that all tail pages of the
> same order now have identical compound_info values, regardless of which
> compound page they belong to. This allows a single page of tail struct
> pages to be shared across all huge pages of the same order on a NUMA
> node.
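>
> To make the encoding concrete, here is a minimal sketch of the mask
> arithmetic (illustrative only; the helper names and exact encoding are
> assumptions, not the patch code). With a power-of-2 sizeof(struct page)
> and a memmap aligned to MAX_FOLIO_SIZE, clearing the low bits of any
> tail's virtual address yields the head:
>
>	/* All tails of an order-N folio share this compound_info value. */
>	static unsigned long tail_compound_info(unsigned int order)
>	{
>		/* e.g. order 9, 64-byte struct page: ~(512 * 64 - 1) */
>		return ~((sizeof(struct page) << order) - 1);
>	}
>
>	/* Any tail of the folio masks down to the same head struct page. */
>	static struct page *head_from_tail(const struct page *tail,
>					   unsigned long mask)
>	{
>		return (struct page *)((unsigned long)tail & mask);
>	}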
>
> Benefits
> ========
>
> 1. Simplified compound_head(): no fake-head detection is needed, and it
> can be implemented in a branchless manner.
>
> 2. Simplified page_ref_add_unless(): RCU protection is removed since
> there is no race with fake-head remapping (sketched below).
>
> 3. Cleaner architecture: The shared tail pages are truly read-only and
> contain valid tail page metadata.
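>
> To illustrate point 2, a rough before/after sketch of
> page_ref_add_unless() (simplified; the real code in page_ref.h differs
> in detail):
>
>	/*
>	 * Before: the fake-head check runs under RCU so the vmemmap page
>	 * being remapped by HVO cannot go away under us.
>	 */
>	static inline bool page_ref_add_unless_old(struct page *page, int nr, int u)
>	{
>		bool ret = false;
>
>		rcu_read_lock();
>		if (!page_is_fake_head(page) && page_ref_count(page) != u)
>			ret = atomic_add_unless(&page->_refcount, nr, u);
>		rcu_read_unlock();
>
>		return ret;
>	}
>
>	/* After: no fake heads, so no RCU section is needed. */
>	static inline bool page_ref_add_unless_new(struct page *page, int nr, int u)
>	{
>		return atomic_add_unless(&page->_refcount, nr, u);
>	}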
>
> If sizeof(struct page) is not a power of 2, there are no functional
> changes; HVO is not supported in that configuration.
>
> I had hoped to see a performance improvement, but my testing so far has
> shown either no change or only a slight improvement within the noise.
>
> Series Organization
> ===================
>
> Patches 1-2: Preparation - move MAX_FOLIO_ORDER, add alignment check
> Patches 3-5: Refactoring - interface changes, field rename, code movement
> Patch 6: Core change - new mask-based compound_head() encoding
> Patch 7: Correctness fix - page_zonenum() must use head page
> Patch 8: Refactor vmemmap_walk for new design
> Patch 9: Eliminate fake heads with shared tail pages
> Patches 10-13: Cleanup - remove fake head infrastructure
> Patch 14: Documentation update
>
> Changes in v2:
> ==============
>
> - Handled boot-allocated huge pages correctly. (Frank)
>
> - Changed from per-hstate vmemmap_tail to per-node vmemmap_tails[] array
> in pglist_data. (Muchun)
>
> - Added spin_lock(&hugetlb_lock) protection in vmemmap_get_tail() to fix
> a race condition where two threads could both allocate tail pages; the
> losing thread now properly frees its allocated page (a rough sketch of
> this pattern follows this list). (Usama)
>
> - Added a warning if the memmap is not aligned to MAX_FOLIO_SIZE, which
> is required for the mask approach. (Muchun)
>
> - Made page_zonenum() use the head page - a correctness fix, since shared
> tail pages cannot have valid zone information. (Muchun)
>
> - Added 'const' qualifier to head parameter in set_compound_head() and
> prep_compound_tail(). (Usama)
>
> - Updated commit messages.
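>
> For the vmemmap_get_tail() race fix above, a minimal sketch of the
> allocate-then-recheck pattern (hypothetical shape: the real function's
> signature, indexing and initialization of the shared tail page are not
> spelled out here and may differ in the series):
>
>	static struct page *get_shared_tail(int nid, unsigned int order)
>	{
>		struct page *page, *tail;
>
>		/* Allocate outside the lock; initialization omitted. */
>		page = alloc_pages_node(nid, GFP_KERNEL, 0);
>		if (!page)
>			return NULL;
>
>		spin_lock(&hugetlb_lock);
>		tail = NODE_DATA(nid)->vmemmap_tails[order];
>		if (!tail) {
>			/* We won the race: publish our page. */
>			NODE_DATA(nid)->vmemmap_tails[order] = page;
>			tail = page;
>			page = NULL;
>		}
>		spin_unlock(&hugetlb_lock);
>
>		/* The losing thread frees the page it allocated. */
>		if (page)
>			__free_page(page);
>
>		return tail;
>	}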
>
> Kiryl Shutsemau (14):
> mm: Move MAX_FOLIO_ORDER definition to mmzone.h
> mm/sparse: Check memmap alignment
> mm: Change the interface of prep_compound_tail()
> mm: Rename the 'compound_head' field in the 'struct page' to
> 'compound_info'
> mm: Move set/clear_compound_head() next to compound_head()
> mm: Rework compound_head() for power-of-2 sizeof(struct page)
> mm: Make page_zonenum() use head page
> mm/hugetlb: Refactor code around vmemmap_walk
> mm/hugetlb: Remove fake head pages
> mm: Drop fake head checks
> hugetlb: Remove VMEMMAP_SYNCHRONIZE_RCU
> mm/hugetlb: Remove hugetlb_optimize_vmemmap_key static key
> mm: Remove the branch from compound_head()
> hugetlb: Update vmemmap_dedup.rst
>
> .../admin-guide/kdump/vmcoreinfo.rst | 2 +-
> Documentation/mm/vmemmap_dedup.rst | 62 ++--
> include/linux/mm.h | 31 --
> include/linux/mm_types.h | 20 +-
> include/linux/mmzone.h | 47 +++
> include/linux/page-flags.h | 163 ++++-------
> include/linux/page_ref.h | 8 +-
> include/linux/types.h | 2 +-
> kernel/vmcore_info.c | 2 +-
> mm/hugetlb.c | 8 +-
> mm/hugetlb_vmemmap.c | 270 +++++++++---------
> mm/internal.h | 12 +-
> mm/mm_init.c | 2 +-
> mm/page_alloc.c | 4 +-
> mm/slab.h | 2 +-
> mm/sparse-vmemmap.c | 44 ++-
> mm/sparse.c | 3 +
> mm/util.c | 16 +-
> 18 files changed, 345 insertions(+), 353 deletions(-)
>
> --
> 2.51.2
>
--
Kiryl Shutsemau / Kirill A. Shutemov