Message-ID: <5bf65002-8d2f-4b9b-8f22-3ba69124335c@redhat.com>
Date: Mon, 4 Aug 2025 21:10:13 +0200
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>
Cc: Juan Yescas <jyescas@...gle.com>, akash.tyagi@...iatek.com,
Andrew Morton <akpm@...ux-foundation.org>,
angelogioacchino.delregno@...labora.com, hannes@...xchg.org,
Brendan Jackman <jackmanb@...gle.com>, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mediatek@...ts.infradead.org,
Linux Memory Management List <linux-mm@...ck.org>, matthias.bgg@...il.com,
Michal Hocko <mhocko@...e.com>, Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, wsd_upstream@...iatek.com,
Kalesh Singh <kaleshsingh@...gle.com>, "T.J. Mercier"
<tjmercier@...gle.com>, Isaac Manjarres <isaacmanjarres@...gle.com>
Subject: Re: [RFC PATCH] mm/page_alloc: Add PCP list for THP CMA
On 04.08.25 21:00, Zi Yan wrote:
> On 4 Aug 2025, at 14:49, David Hildenbrand wrote:
>
>> On 04.08.25 20:20, Juan Yescas wrote:
>>> Hi David/Zi,
>>>
>>> Is there any reason why the MIGRATE_CMA pages are not in the PCP lists?
>>>
>>> There are many devices that need fast allocation of MIGRATE_CMA pages,
>>> and they have to get them from the buddy allocator, which is a bit
>>> slower in comparison to the PCP lists.
>>>
>>> We also have cases where the MIGRATE_CMA memory requirements are big.
>>> For example, GPUs need MIGRATE_CMA memory in the range of 30MiB to 500MiB.
>>> These cases would benefit if we had THPs for CMA.
>>>
>>> Could we add support for MIGRATE_CMA pages on the PCP and THP lists?
>>
>> Remember how CMA memory is used:
>>
>> The owner allocates it through cma_alloc() and friends, where the CMA allocator will try allocating *specific physical memory regions* using alloc_contig_range(). It doesn't just go ahead and pick a random CMA page from the buddy (or PCP) lists. Doesn't work (just imagine having different CMA areas etc).
>
> Yeah, unless some code is relying on gfp_to_alloc_flags_cma() setting ALLOC_CMA
> so it can get CMA pages from buddy.

Right, but that's just for internal purposes IIUC, to grab pages from
the CMA lists when serving movable allocations.
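
For reference, that helper is essentially just the snippet below (a
simplified sketch of what mm/page_alloc.c does; details may differ
between kernel versions): movable allocations get ALLOC_CMA, which lets
the buddy hand out pages from MIGRATE_CMA pageblocks for them.

static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
						  unsigned int alloc_flags)
{
#ifdef CONFIG_CMA
	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
		alloc_flags |= ALLOC_CMA;
#endif
	return alloc_flags;
}
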
>
>>
>> Anybody else is free to use CMA pages for MOVABLE allocations. So we treat them as being MOVABLE on the PCP.
>>
>> Having a separate CMA PCP list doesn't solve or speed up anything, really.
>
> It can be slower when small CMA pages are on PCP lists and large CMA pages
> cannot be allocated; one then needs to drain the PCP lists. This assumes the
> code is trying to get CMA pages from buddy, which is not how CMA memory is
> designed to be used, as David mentioned above.

Right. And alloc_contig_range_noprof() already does a
drain_all_pages(cc.zone).
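
For completeness, the owner-side path discussed above would look roughly
like the sketch below (illustrative only; the my_gpu_* helpers are
made-up names, but cma_alloc()/cma_release() is the real interface, and
cma_alloc() ends up in alloc_contig_range(), which takes care of the
PCP drain):

#include <linux/cma.h>
#include <linux/dma-map-ops.h>	/* dev_get_cma_area() */
#include <linux/mm.h>		/* get_order() */
#include <linux/sizes.h>

/* Allocate nr_pages of contiguous memory from the device's CMA area. */
static struct page *my_gpu_alloc_cma(struct device *dev, unsigned long nr_pages)
{
	struct cma *cma = dev_get_cma_area(dev);

	if (!cma)
		return NULL;

	/*
	 * cma_alloc() picks a free range inside this specific CMA area and
	 * calls alloc_contig_range() on it; it never goes through the
	 * buddy/PCP fast path.
	 */
	return cma_alloc(cma, nr_pages, get_order(SZ_2M), false);
}

static void my_gpu_free_cma(struct device *dev, struct page *page,
			    unsigned long nr_pages)
{
	cma_release(dev_get_cma_area(dev), page, nr_pages);
}
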
--
Cheers,
David / dhildenb