Message-Id: <9C029C3B-E140-4FC2-A680-8580AC753B69@linux.dev>
Date: Tue, 20 Jan 2026 10:50:03 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Kiryl Shutsemau <kas@...nel.org>
Cc: "David Hildenbrand (Red Hat)" <david@...nel.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Matthew Wilcox <willy@...radead.org>,
 Usama Arif <usamaarif642@...il.com>,
 Frank van der Linden <fvdl@...gle.com>,
 Oscar Salvador <osalvador@...e.de>,
 Mike Rapoport <rppt@...nel.org>,
 Vlastimil Babka <vbabka@...e.cz>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Zi Yan <ziy@...dia.com>,
 Baoquan He <bhe@...hat.com>,
 Michal Hocko <mhocko@...e.com>,
 Johannes Weiner <hannes@...xchg.org>,
 Jonathan Corbet <corbet@....net>,
 kernel-team@...a.com,
 linux-mm@...ck.org,
 linux-kernel@...r.kernel.org,
 linux-doc@...r.kernel.org
Subject: Re: [PATCHv3 10/15] mm/hugetlb: Remove fake head pages



> On Jan 19, 2026, at 23:15, Kiryl Shutsemau <kas@...nel.org> wrote:
> 
> On Sat, Jan 17, 2026 at 10:38:48AM +0800, Muchun Song wrote:
>> 
>> 
>>> On Jan 16, 2026, at 23:52, Kiryl Shutsemau <kas@...nel.org> wrote:
>>> 
>>> On Fri, Jan 16, 2026 at 10:38:02AM +0800, Muchun Song wrote:
>>>> 
>>>> 
>>>>> On Jan 16, 2026, at 01:23, Kiryl Shutsemau <kas@...nel.org> wrote:
>>>>> 
>>>>> On Thu, Jan 15, 2026 at 05:49:43PM +0100, David Hildenbrand (Red Hat) wrote:
>>>>>> On 1/15/26 15:45, Kiryl Shutsemau wrote:
>>>>>>> HugeTLB Vmemmap Optimization (HVO) reduces memory usage by freeing most
>>>>>>> vmemmap pages for huge pages and remapping the freed range to a single
>>>>>>> page containing the struct page metadata.
>>>>>>> 
>>>>>>> With the new mask-based compound_info encoding (for power-of-2 struct
>>>>>>> page sizes), all tail pages of the same order are now identical
>>>>>>> regardless of which compound page they belong to. This means the tail
>>>>>>> pages can be truly shared without fake heads.
>>>>>>> 
>>>>>>> Allocate a single page of initialized tail struct pages per NUMA node
>>>>>>> per order in the vmemmap_tails[] array in pglist_data. All huge pages
>>>>>>> of that order on the node share this tail page, mapped read-only into
>>>>>>> their vmemmap. The head page remains unique per huge page.
>>>>>>> 
>>>>>>> This eliminates fake heads while maintaining the same memory savings,
>>>>>>> and simplifies compound_head() by removing fake head detection.
>>>>>>> 
>>>>>>> Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
>>>>>>> ---
>>>>>>> include/linux/mmzone.h | 16 ++++++++++++++-
>>>>>>> mm/hugetlb_vmemmap.c   | 44 ++++++++++++++++++++++++++++++++++++++++--
>>>>>>> mm/sparse-vmemmap.c    | 44 ++++++++++++++++++++++++++++++++++--------
>>>>>>> 3 files changed, 93 insertions(+), 11 deletions(-)
>>>>>>> 
>>>>>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>>>>>>> index 322ed4c42cfc..2ee3eb610291 100644
>>>>>>> --- a/include/linux/mmzone.h
>>>>>>> +++ b/include/linux/mmzone.h
>>>>>>> @@ -82,7 +82,11 @@
>>>>>>> * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect
>>>>>>> * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit.
>>>>>>> */
>>>>>>> -#define MAX_FOLIO_ORDER get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G)
>>>>>>> +#ifdef CONFIG_64BIT
>>>>>>> +#define MAX_FOLIO_ORDER (34 - PAGE_SHIFT)
>>>>>>> +#else
>>>>>>> +#define MAX_FOLIO_ORDER (30 - PAGE_SHIFT)
>>>>>>> +#endif
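
A rough picture of what the commit message describes, not taken from the
patch itself (numbers assume 4 KiB base pages and a 64-byte struct page,
i.e. 64 struct pages per vmemmap page):

/*
 * One 2 MiB huge page (order 9): 512 struct pages == 8 vmemmap pages.
 *
 *   vmemmap page 0 (head + first tails): stays unique per huge page
 *   vmemmap pages 1..7 (tails only):     all mapped read-only to the one
 *                                        shared tail page for this node
 *                                        and order (the vmemmap_tails[]
 *                                        entry in pglist_data)
 *
 * Since all tail pages of a given order are identical under the
 * mask-based compound_info encoding, a single pre-initialized page per
 * (node, order) can back the tail portion of every such huge page's
 * vmemmap, and no fake head pages are needed.
 */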
>>>>>> 
>>>>>> Where do these magic values stem from, and how do they relate to the
>>>>>> comment above that clearly spells out 16G vs. 1G?
>>>>> 
>>>>> This doesn't change the resulting value: 1UL << 34 is 16 GiB, 1UL << 30
>>>>> is 1 GiB. Subtract PAGE_SHIFT to get the order.
>>>>> 
>>>>> The change allows the value to be used to define NR_VMEMMAP_TAILS, which
>>>>> is used to specify the size of the vmemmap_tails array.
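
For reference, the arithmetic is unchanged: with 4 KiB pages PAGE_SHIFT is
12, so the 64-bit case gives order 34 - 12 = 22 and (1UL << 22) * 4 KiB =
16 GiB, while the 32-bit case gives order 18, i.e. 1 GiB. A minimal sketch
of the sizing this enables follows; the exact NR_VMEMMAP_TAILS definition
and the element type of vmemmap_tails[] are assumptions here, not taken
from the series:

#ifdef CONFIG_64BIT
#define MAX_FOLIO_ORDER		(34 - PAGE_SHIFT)	/* order 22 with 4 KiB pages */
#else
#define MAX_FOLIO_ORDER		(30 - PAGE_SHIFT)	/* order 18 with 4 KiB pages */
#endif

/* Assumed: one slot per possible folio order, 0 .. MAX_FOLIO_ORDER. */
#define NR_VMEMMAP_TAILS	(MAX_FOLIO_ORDER + 1)

typedef struct pglist_data {
	/* ... existing fields ... */

	/* Assumed element type: the shared page of tail struct pages per order. */
	struct page *vmemmap_tails[NR_VMEMMAP_TAILS];
} pg_data_t;

The open-coded form is a plain integer constant expression, which is what
makes it usable for sizing a fixed array at file scope.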
>>>> 
>>>> How about allocating the ->vmemmap_tails array dynamically? If sizeof(struct
>>>> page) is not a power of two, then we could optimize away this array. Besides,
>>>> the original MAX_FOLIO_ORDER could work as well.
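
The power-of-two condition above cannot be evaluated by the preprocessor
(sizeof(struct page) is not visible to #if), so optimizing the storage
away would indeed mean something dynamic. A sketch of such a gate, with a
hypothetical helper name, not code from this series:

#include <linux/log2.h>
#include <linux/mm_types.h>

/*
 * Hypothetical init-time gate: skip allocating shared tail pages entirely
 * when struct page is not a power-of-two size, since the mask-based
 * compound_info encoding (and hence tail sharing) only applies then.
 */
static int __init vmemmap_tails_init(void)
{
	if (!is_power_of_2(sizeof(struct page)))
		return 0;

	/* ... allocate one tail page per node and per order ... */
	return 0;
}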
>>> 
>>> This is tricky.
>>> 
>>> We need the vmemmap_tails array to be around early, in
>>> hugetlb_vmemmap_init_early(). By that time, we don't have slab
>>> functional yet.
>> 
>> I mean a zero-size array at the end of pg_data_t; no slab is needed.
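
Roughly what is being suggested, as an illustrative sketch only (field
placement and boot-time sizing are assumptions): a flexible array member
adds no storage to sizeof(pg_data_t), and for NUMA each node's pg_data_t
is allocated at boot anyway, so the trailing space could be sized there,
and sized to zero when struct page is not a power of two.

typedef struct pglist_data {
	/* ... existing fields unchanged ... */

	/*
	 * Illustrative flexible array member: contributes no bytes to the
	 * struct itself; whoever allocates the node's pg_data_t would have
	 * to reserve trailing space for the entries it needs.
	 */
	struct page *vmemmap_tails[];
} pg_data_t;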
> 
> For !NUMA, the struct is in BSS. See contig_page_data.

Right. I missed that.

> 
> Dynamic array won't fly there.
> 
> -- 
>  Kiryl Shutsemau / Kirill A. Shutemov
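
For reference, this is the !NUMA arrangement that blocks the flexible-array
idea: the single node's pg_data_t is a statically defined global (hence in
BSS), so its size is fixed at compile time and there is no allocation site
at which trailing space could be reserved.

/* !NUMA, as declared in include/linux/mmzone.h: */
extern struct pglist_data contig_page_data;
#define NODE_DATA(nid)		(&contig_page_data)

/*
 * The definition of contig_page_data is a plain global object, so a
 * trailing flexible array member would get no storage at all -- which is
 * why vmemmap_tails[] needs a compile-time bound (NR_VMEMMAP_TAILS).
 */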


