Message-ID: <aXJHI8El7QHXQuwT@thinkstation>
Date: Thu, 22 Jan 2026 17:59:48 +0000
From: Kiryl Shutsemau <kas@...nel.org>
To: Muchun Song <muchun.song@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	David Hildenbrand <david@...nel.org>, Matthew Wilcox <willy@...radead.org>, 
	Usama Arif <usamaarif642@...il.com>, Frank van der Linden <fvdl@...gle.com>, 
	Oscar Salvador <osalvador@...e.de>, Mike Rapoport <rppt@...nel.org>, 
	Vlastimil Babka <vbabka@...e.cz>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, 
	Zi Yan <ziy@...dia.com>, Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>, 
	Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>, kernel-team@...a.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCHv4 07/14] mm/sparse: Check memmap alignment for
 compound_info_has_mask()

On Thu, Jan 22, 2026 at 10:02:24PM +0800, Muchun Song wrote:
> 
> 
> > On Jan 22, 2026, at 20:43, Kiryl Shutsemau <kas@...nel.org> wrote:
> > 
> > On Thu, Jan 22, 2026 at 07:42:47PM +0800, Muchun Song wrote:
> >> 
> >> 
> >>>> On Jan 22, 2026, at 19:33, Muchun Song <muchun.song@...ux.dev> wrote:
> >>> 
> >>> 
> >>> 
> >>>> On Jan 22, 2026, at 19:28, Kiryl Shutsemau <kas@...nel.org> wrote:
> >>>> 
> >>>> On Thu, Jan 22, 2026 at 11:10:26AM +0800, Muchun Song wrote:
> >>>>> 
> >>>>> 
> >>>>>> On Jan 22, 2026, at 00:22, Kiryl Shutsemau <kas@...nel.org> wrote:
> >>>>>> 
> >>>>>> If page->compound_info encodes a mask, the memmap is expected to be
> >>>>>> naturally aligned to the maximum folio size.
> >>>>>> 
> >>>>>> Add a warning if it is not.
> >>>>>> 
> >>>>>> A warning is sufficient as MAX_FOLIO_ORDER is very rarely used, so the
> >>>>>> kernel is still likely to be functional if this strict check fails.
> >>>>>> 
> >>>>>> Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
> >>>>>> ---
> >>>>>> include/linux/mmzone.h | 1 +
> >>>>>> mm/sparse.c            | 5 +++++
> >>>>>> 2 files changed, 6 insertions(+)
> >>>>>> 
> >>>>>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >>>>>> index 390ce11b3765..7e4f69b9d760 100644
> >>>>>> --- a/include/linux/mmzone.h
> >>>>>> +++ b/include/linux/mmzone.h
> >>>>>> @@ -91,6 +91,7 @@
> >>>>>> #endif
> >>>>>> 
> >>>>>> #define MAX_FOLIO_NR_PAGES (1UL << MAX_FOLIO_ORDER)
> >>>>>> +#define MAX_FOLIO_SIZE (PAGE_SIZE << MAX_FOLIO_ORDER)
> >>>>>> 
> >>>>>> enum migratetype {
> >>>>>> MIGRATE_UNMOVABLE,
> >>>>>> diff --git a/mm/sparse.c b/mm/sparse.c
> >>>>>> index 17c50a6415c2..5f41a3edcc24 100644
> >>>>>> --- a/mm/sparse.c
> >>>>>> +++ b/mm/sparse.c
> >>>>>> @@ -600,6 +600,11 @@ void __init sparse_init(void)
> >>>>>> BUILD_BUG_ON(!is_power_of_2(sizeof(struct mem_section)));
> >>>>>> memblocks_present();
> >>>>>> 
> >>>>>> +  if (compound_info_has_mask()) {
> >>>>>> +  WARN_ON(!IS_ALIGNED((unsigned long)pfn_to_page(0),
> >>>>>> +     MAX_FOLIO_SIZE / sizeof(struct page)));
> >>>>> 
> >>>>> I still have concerns about this. If certain architectures or configurations,
> >>>>> especially when KASLR is enabled, do not meet the requirements during the
> >>>>> boot stage, only specific folios larger than a certain size might end up with
> >>>>> incorrect struct page entries as the system runs. How can we detect issues
> >>>>> arising from either updating the struct page or making incorrect logical
> >>>>> judgments based on information retrieved from the struct page?
> >>>>> 
> >>>>> After all, when we see this warning, we don't know when or if a problem will
> >>>>> occur in the future. It's like a time bomb in the system, isn't it? Therefore,
> >>>>> I would like to add a warning check to the memory allocation place, for
> >>>>> example:
> >>>>> 
> >>>>> WARN_ON(!IS_ALIGNED((unsigned long)&folio->page, folio_size(folio) / sizeof(struct page)));
> >>>> 
> >>>> I don't think it is needed. Any compound page usage would trigger the
> >>>> problem. It should happen pretty early.
> >>> 
> >>> Why would you expect it to be discovered early? If the struct page
> >>> alignment only satisfies 4M pages (i.e., the largest pages the buddy
> >>> allocator can hand out), how can you be sure a similar path through CMA
> >>> will be exercised early on, when the system may only allocate through
> >>> CMA later (after all, CMA is used much less than buddy)?
> > 
> > True.
> > 
> >> Suppose we are more aggressive. If the struct page alignment cannot
> >> satisfy 2GB pages (an uncommon allocation size), users might not care
> >> about such a warning after the system boots. As long as no allocation of
> >> 2GB or more happens, the system runs without problems. But once some
> >> path does allocate a page of 2GB or more, the system descends into
> >> chaos, and by that time the warning may have long rotated out of the
> >> system log. Is that not the case?
> > 
> > It is.
> > 
> > I expect the warning to be reported early if we have configurations that
> > do not satisfy the alignment requirement, even in the absence of a crash.
> 
> If you’re saying the issue was only caught during
> testing, keep in mind that with KASLR enabled the
> warning is triggered at run-time; you can’t assume it
> will never appear in production.

Let's look at what architectures actually do with vmemmap.

On 64-bit machines, we want vmemmap to be naturally aligned to
accommodate 16GiB pages.
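
As a rough illustration of why (the helper below is made up for this
example, not necessarily how the series encodes it): if a tail page
stores a mask instead of a head pointer, recovering the head is a single
AND on the vmemmap address, which only lands on the head page if the
folio's memmap is naturally aligned:

static inline struct page *mask_to_head(struct page *tail,
					unsigned long mask)
{
	/*
	 * Only correct if the folio's struct pages start at an address
	 * that is a multiple of the span the mask clears.
	 */
	return (struct page *)((unsigned long)tail & mask);
}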

Assuming a 64-byte struct page, that requires 256 MiB alignment for 4K
PAGE_SIZE, 64 MiB for 16K PAGE_SIZE, and 16 MiB for 64K PAGE_SIZE.
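
Spelling out the formula behind those numbers (taking MAX_FOLIO_SIZE to
be the 16 GiB case):

/*
 * vmemmap span of one max-order folio, which is also the required
 * natural alignment:
 */
unsigned long align = (MAX_FOLIO_SIZE / PAGE_SIZE) * sizeof(struct page);
/*
 *  4K pages: (16G /  4K) * 64 = 256 MiB
 * 16K pages: (16G / 16K) * 64 =  64 MiB
 * 64K pages: (16G / 64K) * 64 =  16 MiB
 */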

Only 3 architectures support HVO (select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP):
loongarch, riscv and x86. We should make the feature conditional on HVO
to limit exposure.
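
Something along these lines (purely a sketch; the actual gating in the
series may well differ):

static inline bool compound_info_has_mask(void)
{
	/* Only use the mask encoding where HVO can be built in. */
	return IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP);
}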

I am not sure why arm64 is not in the club.

x86 aligns vmemmap to 1G - OK.

loongarch aligns vmemmap to PMD_SIZE, which does not fit us with 4K and
16K PAGE_SIZE. It should be easily fixable. No KASLR there.

riscv aligns vmemmap to section size (128MiB) which is not enough.
Again, easily fixable.
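
Both fixes would presumably amount to rounding the vmemmap base up,
something like (illustrative; "vmemmap_start" is a stand-in for whatever
the architecture actually uses):

/* Give the memmap of a MAX_FOLIO_ORDER folio natural alignment: */
vmemmap_start = ALIGN(vmemmap_start,
		      MAX_FOLIO_NR_PAGES * sizeof(struct page));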

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
