Message-ID: <75e1f8a6-19c4-4fe1-9ab8-e732a5fdfa5e@kernel.org>
Date: Thu, 5 Feb 2026 12:29:02 +0100
From: "David Hildenbrand (arm)" <david@...nel.org>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>
Cc: Rik van Riel <riel@...riel.com>, Usama Arif <usamaarif642@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
hannes@...xchg.org, shakeel.butt@...ux.dev, kas@...nel.org,
baohua@...nel.org, dev.jain@....com, baolin.wang@...ux.alibaba.com,
npache@...hat.com, Liam.Howlett@...cle.com, ryan.roberts@....com,
vbabka@...e.cz, lance.yang@...ux.dev, linux-kernel@...r.kernel.org,
kernel-team@...a.com, Frank van der Linden <fvdl@...gle.com>
Subject: Re: [RFC 00/12] mm: PUD (1GB) THP implementation
On 2/4/26 11:56, Lorenzo Stoakes wrote:
> On Mon, Feb 02, 2026 at 10:50:35AM -0500, Zi Yan wrote:
>> On 2 Feb 2026, at 6:30, Lorenzo Stoakes wrote:
>>
>>>
>>> That link doesn't work?
>>>
>>> Did a quick search for CMA balancing on lore, couldn't find anything, could you
>>> provide a lore link?
>>
>> https://lwn.net/Articles/1038263/
>>
>>>
>>>
>>> I'm not really in favour of this kind of approach. There's plenty of things that
>>> were considered 'temporary' upstream that became rather permanent :)
>>>
>>> Maybe we can't cover all corner-cases, but we need to make sure whatever we do
>>> send upstream is maintainable, conceptually sensible and doesn't paint us into
>>> any corners, etc.
>>>
>>>
>>> Could you expand on that?
>>
>> I also would like to hear David’s opinion on using CMA for 1GB THP.
>> He did not like it[1] when I posted my patch back in 2020, but it has
>> been more than 5 years. :)
>
> Yes please David :)
Heh, read Zi's mail first :)
>
> I find the idea of using the CMA for this a bit gross. And I fear we're
> essentially expanding the hacks for DAX to everyone.
Yup.
>
> Again I really feel that we should be tackling technical debt here, rather
> than adding features on shaky foundations and just making things worse.
>
Yup.
> We are inundated with series-after-series for THP trying to add features
> but really not very many that are tackling this debt, and I think it's time
> to get firmer about that.
Almost nobody wants to do cleanups because there is the belief that only
features are important; and some companies seem to value features more
than cleanups when it comes to promotions etc.
And cleanups in that area are hard, because you'll very likely just
break stuff; it's all so weirdly interconnected.
See the max_ptes_none discussion ...
>
>>
>> The other direction I explored is to get 1GB THP from the buddy allocator.
>> That means we need to:
>> 1. bump MAX_PAGE_ORDER to 18 or make it a runtime variable so that only 1GB
>> THP users need to bump it,
>
> Would we need to bump the pageblock size too, to stand more of a chance of
> avoiding fragmentation?
We discussed one idea of another level of anti-fragmentation on top (I
forgot what we called it; essentially bigger blocks that group pages in
the buddy). But implementing that is non-trivial.
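To make the layering concrete, here is a purely hypothetical toy sketch
in C -- not kernel code; the names, the group granularity and the two
group types are all made up for illustration. Each "group" spans enough
order-9 (2MiB) pageblocks to back one order-18 (1GiB) allocation, and
unmovable allocations would be steered away from movable-only groups:

#include <stdbool.h>

/*
 * Purely hypothetical toy, not kernel code: a coarser anti-fragmentation
 * unit layered above pageblocks. Each group spans enough order-9 (2MiB)
 * pageblocks to back one order-18 (1GiB) allocation.
 */
#define PAGEBLOCK_ORDER         9
#define GROUP_ORDER             18
#define BLOCKS_PER_GROUP        (1U << (GROUP_ORDER - PAGEBLOCK_ORDER)) /* 512 */

enum group_type { GROUP_ANY, GROUP_MOVABLE_ONLY };

struct block_group {
        enum group_type type;
        unsigned int free_blocks;       /* free pageblocks in this group */
};

/* Steer unmovable allocations away from movable-only groups. */
static bool group_allows_unmovable(const struct block_group *g)
{
        return g->type != GROUP_MOVABLE_ONLY;
}

/* A fully free movable-only group could back one 1GiB THP directly. */
static bool group_can_back_pud_thp(const struct block_group *g)
{
        return g->type == GROUP_MOVABLE_ONLY &&
               g->free_blocks == BLOCKS_PER_GROUP;
}

The point is just the layering: pageblocks would keep their per-2MiB
migratetype, while the group type provides the coarse policy a 1GiB
allocation actually needs.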
Long-term, though, we really need something better than pageblocks and
hacky CMA reservations for anything larger.
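For reference, the arithmetic behind the order-18 figure Zi mentions
above (a standalone sketch, assuming the common x86-64 defaults of 4KiB
base pages and an order-9 pageblock; nothing here is read from a running
kernel):

#include <stdio.h>

/* Common x86-64 defaults, hard-coded here for illustration only. */
#define PAGE_SHIFT      12      /* 4KiB base pages */
#define PMD_ORDER       9       /* 2MiB PMD THP; also the pageblock order */
#define PUD_ORDER       18      /* 1GiB PUD THP: 1GiB / 4KiB = 2^18 pages */

int main(void)
{
        printf("PMD THP: order %d = %lu KiB\n", PMD_ORDER,
               (1UL << (PMD_ORDER + PAGE_SHIFT)) >> 10);
        printf("PUD THP: order %d = %lu MiB\n", PUD_ORDER,
               (1UL << (PUD_ORDER + PAGE_SHIFT)) >> 20);
        return 0;
}

With MAX_PAGE_ORDER typically at 10 (4MiB) today, the buddy simply
cannot hand out an order-18 page, which is why Zi's option 1 has to
bump it.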
--
Cheers,
David