Message-ID: <aXIIOf7cHe9hzk0W@thinkstation>
Date: Thu, 22 Jan 2026 11:22:08 +0000
From: Kiryl Shutsemau <kas@...nel.org>
To: Zi Yan <ziy@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>, David Hildenbrand <david@...nel.org>,
Matthew Wilcox <willy@...radead.org>, Usama Arif <usamaarif642@...il.com>,
Frank van der Linden <fvdl@...gle.com>, Oscar Salvador <osalvador@...e.de>,
Mike Rapoport <rppt@...nel.org>, Vlastimil Babka <vbabka@...e.cz>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>, kernel-team@...a.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCHv4 07/14] mm/sparse: Check memmap alignment for
compound_info_has_mask()

On Wed, Jan 21, 2026 at 12:58:36PM -0500, Zi Yan wrote:
> On 21 Jan 2026, at 11:22, Kiryl Shutsemau wrote:
>
> > If page->compound_info encodes a mask, the memmap is expected to be
> > naturally aligned to the maximum folio size.
> >
> > Add a warning if it is not.
> >
> > A warning is sufficient as MAX_FOLIO_ORDER is very rarely used, so the
> > kernel is still likely to be functional if this strict check fails.
> >
> > Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
> > ---
> > include/linux/mmzone.h | 1 +
> > mm/sparse.c | 5 +++++
> > 2 files changed, 6 insertions(+)
> >
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 390ce11b3765..7e4f69b9d760 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -91,6 +91,7 @@
> > #endif
> >
> > #define MAX_FOLIO_NR_PAGES (1UL << MAX_FOLIO_ORDER)
> > +#define MAX_FOLIO_SIZE (PAGE_SIZE << MAX_FOLIO_ORDER)
> >
> > enum migratetype {
> > MIGRATE_UNMOVABLE,
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index 17c50a6415c2..5f41a3edcc24 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -600,6 +600,11 @@ void __init sparse_init(void)
> > BUILD_BUG_ON(!is_power_of_2(sizeof(struct mem_section)));
> > memblocks_present();
> >
> > + if (compound_info_has_mask()) {
> > + WARN_ON(!IS_ALIGNED((unsigned long)pfn_to_page(0),
> > + MAX_FOLIO_SIZE / sizeof(struct page)));
> > + }
> > +
>
> 16GB is only possible on arm64 with a 64KB base page. Would it be
> overkill to align vmemmap to it unconditionally? Or how likely is this
> to cause a false-positive warning?
CMA can give you a 16GiB page on x86.
--
Kiryl Shutsemau / Kirill A. Shutemov