Message-ID: <aEhnELJQLw8S8Bho@arm.com>
Date: Tue, 10 Jun 2025 18:10:40 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Feng Tang <feng.tang@...ux.alibaba.com>
Cc: Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Yang Shi <yang@...amperecomputing.com>,
Ryan Roberts <ryan.roberts@....com>,
Baruch Siach <baruch@...s.co.il>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Robin Murphy <robin.murphy@....com>
Subject: Re: [PATCH RFC] arm64/mm: Lift the cma address limit when
CONFIG_DMA_NUMA_CMA=y
On Wed, May 21, 2025 at 09:47:01AM +0800, Feng Tang wrote:
> When porting a CMA-related usage from an x86_64 server to an arm64 server,
> the "cma=4G" setup failed on arm64; the reason is that arm64 imposes a 4G
> (32-bit) address limit on CMA reservations.
>
> The limit is reasonable due to device DMA requirements, but for NUMA
> servers with CONFIG_DMA_NUMA_CMA enabled the limit is not required, as
> that config already allows CMA areas to be reserved on different NUMA
> nodes whose memory very likely goes beyond the 4G limit.
>
> Lift the CMA limit for platforms with such a configuration.
I don't think that's the right fix. Devices that have an associated NUMA
node may be able to address memory beyond 4GB, but the default for
NUMA_NO_NODE devices is still dma_contiguous_default_area. I also don't
like making such run-time decisions based on the config option.
That said, maybe we should make the under-4G CMA allocation a best
effort: in the arch code, if it fails, attempt the allocation again with
a limit of 0, and maybe print a pr_notice() that CMA allocation in the
DMA zone failed.
Adding Robin in case he has a different view.
> Signed-off-by: Feng Tang <feng.tang@...ux.alibaba.com>
> ---
> arch/arm64/mm/init.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index b99bf3980fc6..661758678cc4 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -312,6 +312,7 @@ void __init arm64_memblock_init(void)
> void __init bootmem_init(void)
> {
> unsigned long min, max;
> + phys_addr_t cma_limit;
>
> min = PFN_UP(memblock_start_of_DRAM());
> max = PFN_DOWN(memblock_end_of_DRAM());
> @@ -343,8 +344,14 @@ void __init bootmem_init(void)
>
> /*
> * Reserve the CMA area after arm64_dma_phys_limit was initialised.
> + *
> + * When CONFIG_DMA_NUMA_CMA is enabled, the system may have CMA
> + * reserved areas in different NUMA nodes, which likely go beyond the
> + * 32-bit limit, so use (PHYS_MASK + 1) as the CMA limit.
> */
> - dma_contiguous_reserve(arm64_dma_phys_limit);
> + cma_limit = IS_ENABLED(CONFIG_DMA_NUMA_CMA) ?
> + (PHYS_MASK + 1) : arm64_dma_phys_limit;
> + dma_contiguous_reserve(cma_limit);
>
> /*
> * request_standard_resources() depends on crashkernel's memory being
> --
> 2.39.5 (Apple Git-154)