Message-ID: <062900fa-6419-4748-81d1-9128ce6c46d0@kernel.org>
Date: Thu, 5 Feb 2026 13:56:36 +0100
From: "David Hildenbrand (Arm)" <david@...nel.org>
To: Kiryl Shutsemau <kas@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>, Matthew Wilcox <willy@...radead.org>,
Usama Arif <usamaarif642@...il.com>, Frank van der Linden <fvdl@...gle.com>
Cc: Oscar Salvador <osalvador@...e.de>, Mike Rapoport <rppt@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>,
Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>,
Huacai Chen <chenhuacai@...nel.org>, WANG Xuerui <kernel@...0n.name>,
Palmer Dabbelt <palmer@...belt.com>, Paul Walmsley
<paul.walmsley@...ive.com>, Albert Ou <aou@...s.berkeley.edu>,
Alexandre Ghiti <alex@...ti.fr>, kernel-team@...a.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
loongarch@...ts.linux.dev, linux-riscv@...ts.infradead.org
Subject: Re: [PATCHv6 06/17] LoongArch/mm: Align vmemmap to maximal folio size
On 2/4/26 17:56, David Hildenbrand (arm) wrote:
> On 2/2/26 16:56, Kiryl Shutsemau wrote:
>> The upcoming change to the HugeTLB vmemmap optimization (HVO) requires
>> struct pages of the head page to be naturally aligned with regard to the
>> folio size.
>>
>> Align vmemmap to MAX_FOLIO_NR_PAGES.
>>
>> Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
>> ---
>> arch/loongarch/include/asm/pgtable.h | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
>> index c33b3bcb733e..f9416acb9156 100644
>> --- a/arch/loongarch/include/asm/pgtable.h
>> +++ b/arch/loongarch/include/asm/pgtable.h
>> @@ -113,7 +113,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>>  	min(PTRS_PER_PGD * PTRS_PER_PUD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, (1UL << cpu_vabits) / 2) - PMD_SIZE - VMEMMAP_SIZE - KFENCE_AREA_SIZE)
>>  #endif
>> -#define vmemmap		((struct page *)((VMALLOC_END + PMD_SIZE) & PMD_MASK))
>> +#define VMEMMAP_ALIGN	max(PMD_SIZE, MAX_FOLIO_NR_PAGES * sizeof(struct page))
>> +#define vmemmap		((struct page *)(ALIGN(VMALLOC_END, VMEMMAP_ALIGN)))
>
>
> Same comment, the "MAX_FOLIO_NR_PAGES * sizeof(struct page)" is just
> black magic here, and the description of the situation is wrong.
>
> Maybe you want to pull the magic "MAX_FOLIO_NR_PAGES * sizeof(struct page)"
> into the core and call it
> 
> #define MAX_FOLIO_VMEMMAP_ALIGN	(MAX_FOLIO_NR_PAGES * sizeof(struct page))
>
> But then special case it based on (a) HVO being configured in and (b) HVO
> being possible
>
> #ifdef HUGETLB_PAGE_OPTIMIZE_VMEMMAP && is_power_of_2(sizeof(struct page))
> /* A very helpful comment explaining the situation. */
> #define MAX_FOLIO_VMEMMAP_ALIGN	(MAX_FOLIO_NR_PAGES * sizeof(struct page))
> #else
> #define MAX_FOLIO_VMEMMAP_ALIGN	0
> #endif
>
> Something like that.
>
Thinking about this ...

The vmemmap start is always struct-page-aligned. Otherwise we'd be in
trouble already.

Isn't it then sufficient to just align the start to MAX_FOLIO_NR_PAGES?

Let's assume sizeof(struct page) == 64 and MAX_FOLIO_NR_PAGES == 512 for
simplicity.

The vmemmap start index would then be a multiple of 512 (0b1000000000):
512, 1024, 1536, 2048 ...

Assume we have a 256-page folio at 1536 + 256 = 1792 (0b11100000000).

Take the last page of that folio (index 2047, 0b11111111111): we would
get to the start of the folio just by AND-ing with ~(256 - 1), which
yields 1792 again.

Which case am I ignoring?
--
Cheers,
David