Message-Id: <71F051F2-5F3B-40A5-9347-BA2D93F2FF3F@linux.dev>
Date: Thu, 22 Jan 2026 11:10:26 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Kiryl Shutsemau <kas@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Usama Arif <usamaarif642@...il.com>,
Frank van der Linden <fvdl@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Mike Rapoport <rppt@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Zi Yan <ziy@...dia.com>,
Baoquan He <bhe@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Jonathan Corbet <corbet@....net>,
kernel-team@...a.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org
Subject: Re: [PATCHv4 07/14] mm/sparse: Check memmap alignment for
compound_info_has_mask()
> On Jan 22, 2026, at 00:22, Kiryl Shutsemau <kas@...nel.org> wrote:
>
> If page->compound_info encodes a mask, the memmap is expected to be
> naturally aligned to the maximum folio size.
>
> Add a warning if it is not.
>
> A warning is sufficient as MAX_FOLIO_ORDER is very rarely used, so the
> kernel is still likely to be functional if this strict check fails.
>
> Signed-off-by: Kiryl Shutsemau <kas@...nel.org>
> ---
> include/linux/mmzone.h | 1 +
> mm/sparse.c | 5 +++++
> 2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 390ce11b3765..7e4f69b9d760 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -91,6 +91,7 @@
> #endif
>
> #define MAX_FOLIO_NR_PAGES (1UL << MAX_FOLIO_ORDER)
> +#define MAX_FOLIO_SIZE (PAGE_SIZE << MAX_FOLIO_ORDER)
>
> enum migratetype {
> MIGRATE_UNMOVABLE,
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 17c50a6415c2..5f41a3edcc24 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -600,6 +600,11 @@ void __init sparse_init(void)
> BUILD_BUG_ON(!is_power_of_2(sizeof(struct mem_section)));
> memblocks_present();
>
> + if (compound_info_has_mask()) {
> + WARN_ON(!IS_ALIGNED((unsigned long)pfn_to_page(0),
> + MAX_FOLIO_SIZE / sizeof(struct page)));
I still have concerns about this. On certain architectures or configurations,
especially with KASLR enabled, the alignment requirement may not be met at
boot, and then only folios above a certain size end up with incorrect struct
page entries while the system runs. How can we detect problems that arise
either from updating such a struct page, or from making wrong logical
decisions based on information read back from it?
After all, when we see this warning, we do not know when, or even whether, a
problem will occur later. It is like a time bomb in the system, isn't it?
Therefore, I would like to add a check at the allocation site, for example:

	WARN_ON(!IS_ALIGNED((unsigned long)&folio->page,
			    folio_size(folio) / sizeof(struct page)));
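To make this concrete, here is a rough sketch of what I have in mind. The
helper name and the idea of calling it from post_alloc_hook() are my
assumptions, not part of this series:

	/*
	 * Sketch only: at allocation time, check that the struct pages
	 * backing this folio are aligned well enough for
	 * page->compound_info to encode a mask.
	 */
	static inline void check_folio_memmap_alignment(struct page *page,
							unsigned int order)
	{
		if (!compound_info_has_mask())
			return;
		WARN_ON(!IS_ALIGNED((unsigned long)page,
				    (PAGE_SIZE << order) / sizeof(struct page)));
	}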
However, in order to minimize the impact on the buddy allocator, I would
personally suggest changing the boot-time `WARN_ON` to a `BUG_ON`, and
reducing the checked size from `MAX_FOLIO_SIZE` to the maximum size the buddy
allocator can hand out, so that the buddy allocator is guaranteed to work.
That requirement is much weaker than alignment to `MAX_FOLIO_SIZE`, so the
`BUG_ON` should be hard to trigger in practice and therefore should not be a
problem. For interfaces that bypass the buddy allocator (such as
`folio_alloc_gigantic` or `cma_alloc_folio`), we need to check whether their
struct page address is aligned as required; if it is not, fail the allocation
and print a message explaining why.
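Roughly, something like this, assuming MAX_PAGE_ORDER is the right bound for
what the buddy allocator can hand out (sketches only; the exact call sites
would need auditing):

	/*
	 * In sparse_init(): promote the check to BUG_ON(), but only
	 * require alignment up to the buddy allocator's limit.
	 */
	if (compound_info_has_mask())
		BUG_ON(!IS_ALIGNED((unsigned long)pfn_to_page(0),
				   (PAGE_SIZE << MAX_PAGE_ORDER) / sizeof(struct page)));

	/*
	 * In folio_alloc_gigantic(), cma_alloc_folio(), etc.: fail the
	 * allocation instead of returning a folio whose memmap cannot
	 * encode the mask.
	 */
	if (compound_info_has_mask() &&
	    !IS_ALIGNED((unsigned long)&folio->page,
			(PAGE_SIZE << order) / sizeof(struct page))) {
		pr_warn("order-%u folio: memmap misaligned for compound_info mask\n",
			order);
		folio_put(folio);
		return NULL;
	}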
> + }
> +
> pnum_begin = first_present_section_nr();
> nid_begin = sparse_early_nid(__nr_to_section(pnum_begin));
>
> --
> 2.51.2
>