Message-ID: <aKsSh0OEjf4GLmIG@kernel.org>
Date: Sun, 24 Aug 2025 16:24:23 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, Alexander Potapenko <glider@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>,
Christoph Lameter <cl@...two.org>, Dennis Zhou <dennis@...nel.org>,
Dmitry Vyukov <dvyukov@...gle.com>, dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org, iommu@...ts.linux.dev,
io-uring@...r.kernel.org, Jason Gunthorpe <jgg@...dia.com>,
Jens Axboe <axboe@...nel.dk>, Johannes Weiner <hannes@...xchg.org>,
John Hubbard <jhubbard@...dia.com>, kasan-dev@...glegroups.com,
kvm@...r.kernel.org, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-arm-kernel@...s.com, linux-arm-kernel@...ts.infradead.org,
linux-crypto@...r.kernel.org, linux-ide@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-mips@...r.kernel.org,
linux-mmc@...r.kernel.org, linux-mm@...ck.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-scsi@...r.kernel.org,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Marco Elver <elver@...gle.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Hocko <mhocko@...e.com>, Muchun Song <muchun.song@...ux.dev>,
netdev@...r.kernel.org, Oscar Salvador <osalvador@...e.de>,
Peter Xu <peterx@...hat.com>, Robin Murphy <robin.murphy@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Tejun Heo <tj@...nel.org>,
virtualization@...ts.linux.dev, Vlastimil Babka <vbabka@...e.cz>,
wireguard@...ts.zx2c4.com, x86@...nel.org, Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH RFC 12/35] mm: limit folio/compound page sizes in
problematic kernel configs
On Thu, Aug 21, 2025 at 10:06:38PM +0200, David Hildenbrand wrote:
> Let's limit the maximum folio size in problematic kernel configs where
> the memmap is allocated per memory section (SPARSEMEM without
> SPARSEMEM_VMEMMAP) to a single memory section.
>
> Currently, only a single architecture supports ARCH_HAS_GIGANTIC_PAGE
> but not SPARSEMEM_VMEMMAP: sh.
>
> Fortunately, the biggest hugetlb size sh supports is 64 MiB
> (HUGETLB_PAGE_SIZE_64MB) and the section size is at least 64 MiB
> (SECTION_SIZE_BITS == 26), so that use case is not degraded: the
> largest sh folio still fits within a single memory section.
>
> As folios and memory sections are naturally aligned to their
> power-of-two size in memory, a single folio can no longer span
> multiple memory sections in these problematic kernel configs.
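Spelling out the alignment argument for the record (sketch only, the
helper name is mine, not something in the patch): with the order capped
at PFN_SECTION_SHIFT, the first and last PFN of a naturally aligned
folio always land in the same section.

static inline bool folio_fits_one_section(unsigned long start_pfn,
					  unsigned int order)
{
	/* natural alignment: start_pfn is a multiple of 1UL << order */
	unsigned long last_pfn = start_pfn + (1UL << order) - 1;

	return (start_pfn >> PFN_SECTION_SHIFT) ==
	       (last_pfn >> PFN_SECTION_SHIFT);
}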
>
> nth_page() is no longer required when operating within a single compound
> page / folio.
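And a sketch of the simplification this buys (the helper name is
hypothetical; nth_page() and folio->page are the real interfaces):
since all pages of a capped folio are contiguous in the memmap, plain
pointer arithmetic suffices inside a folio.

static inline struct page *folio_page_sketch(struct folio *folio,
					     unsigned long i)
{
	/* previously this needed nth_page(&folio->page, i) */
	return &folio->page + i;
}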
>
> Signed-off-by: David Hildenbrand <david@...hat.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> ---
> include/linux/mm.h | 22 ++++++++++++++++++----
> 1 file changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 77737cbf2216a..48a985e17ef4e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2053,11 +2053,25 @@ static inline long folio_nr_pages(const struct folio *folio)
> return folio_large_nr_pages(folio);
> }
>
> -/* Only hugetlbfs can allocate folios larger than MAX_ORDER */
> -#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
> -#define MAX_FOLIO_ORDER PUD_ORDER
> -#else
> +#if !defined(CONFIG_ARCH_HAS_GIGANTIC_PAGE)
> +/*
> + * We don't expect any folios that exceed buddy sizes (and consequently
> + * memory sections).
> + */
> #define MAX_FOLIO_ORDER MAX_PAGE_ORDER
> +#elif defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> +/*
> + * Only pages within a single memory section are guaranteed to be
> + * contiguous. By limiting folios to a single memory section, all folio
> + * pages are guaranteed to be contiguous.
> + */
> +#define MAX_FOLIO_ORDER PFN_SECTION_SHIFT
> +#else
> +/*
> + * There is no real limit on the folio size. We limit them to the maximum we
> + * currently expect.
> + */
> +#define MAX_FOLIO_ORDER PUD_ORDER
> #endif
>
> #define MAX_FOLIO_NR_PAGES (1UL << MAX_FOLIO_ORDER)
> --
> 2.50.1
>
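With the cap in place, a compile-time guard might also be worthwhile;
an illustrative sketch, not part of the patch, using PAGES_PER_SECTION
and static_assert() as they exist in the kernel today:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
/* a MAX_FOLIO_ORDER folio must fit within a single memory section */
static_assert(MAX_FOLIO_NR_PAGES <= PAGES_PER_SECTION);
#endif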
--
Sincerely yours,
Mike.