Message-ID: <98c0b70a-0cbd-46fd-b481-7663905bb8dc@arm.com>
Date: Tue, 10 Jun 2025 20:46:38 +0100
From: Robin Murphy <robin.murphy@....com>
To: Catalin Marinas <catalin.marinas@....com>,
 Feng Tang <feng.tang@...ux.alibaba.com>
Cc: Will Deacon <will@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
 Yang Shi <yang@...amperecomputing.com>, Ryan Roberts <ryan.roberts@....com>,
 Baruch Siach <baruch@...s.co.il>, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] arm64/mm: Lift the cma address limit when
 CONFIG_DMA_NUMA_CMA=y

On 2025-06-10 6:10 pm, Catalin Marinas wrote:
> On Wed, May 21, 2025 at 09:47:01AM +0800, Feng Tang wrote:
>> When porting a CMA-related usage from an x86_64 server to an arm64
>> server, the "cma=4G" setup failed on arm64; the reason is that arm64
>> imposes a 4G (32-bit) address limit on the CMA reservation.
>>
>> The limit is reasonable given device DMA requirements, but for NUMA
>> servers with CONFIG_DMA_NUMA_CMA enabled it is not required, as that
>> config already allows CMA areas to be reserved on different NUMA
>> nodes, whose memory very likely lies beyond the 4G limit.
>>
>> Lift the CMA limit for platforms with such a configuration.
> 
> I don't think that's the right fix. Those devices that have a NUMA node
> associated may be OK to address memory beyond 4GB. The default for
> NUMA_NO_NODE devices is still dma_contiguous_default_area. I also don't
> like making such run-time decisions based on the config option.

Indeed, the fact that the kernel was built with the option enabled says 
nothing at all about the needs of whatever system we're actually running 
on, so that's definitely wrong. This one is also the kind of option 
which may well be enabled in a multi-platform distro kernel, since it 
only adds a tiny amount of code with no functional impact on systems 
which don't explicitly opt in, but offers a useful benefit to those 
which can and do.

Furthermore, the justification doesn't add up at all - if the relevant 
devices could use the per-NUMA-node CMA areas, then... why not just have 
them use the per-NUMA-node CMA areas, no kernel change needed (and maybe 
a slight performance bonus too)? On the other hand, where those areas 
may or may not be allocated is entirely meaningless to NUMA_NO_NODE 
devices which wouldn't use them anyway.

> That said, maybe we should make the under-4G CMA allocation a best
> effort. In the arch code, if that failed, attempt the allocation again
> with a limit of 0 and maybe do a pr_notice() that CMA allocation in the
> DMA zone failed.
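
Something along those lines would presumably look like the below in 
bootmem_init() - sketch only, since dma_contiguous_reserve() returns 
void today, so cma_reserve_failed() is just a stand-in for however the 
failure would actually be reported:

	dma_contiguous_reserve(arm64_dma_phys_limit);
	/* hypothetical check - see note above */
	if (cma_reserve_failed()) {
		pr_notice("CMA: reservation within the DMA zone failed, retrying with no limit\n");
		dma_contiguous_reserve(0);
	}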

TBH given that the command-line parameter can specify placement as well 
as size, I think it would make a lot of sense to allow that to override 
the default limit provided by the arch code. That would give users the 
most flexibility, at the minor cost of having to accept the consequences 
if they do specify something which ends up not working for some devices. 
Otherwise I fear that any attempt to make the code itself cleverer will 
just lead down a rabbit-hole of trying to second-guess the user's intent 
- if the size doesn't fit the limit, who says it's right to increase the 
limit rather than reduce the size? And so on...
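
For reference, the parameter already takes an optional placement - 
cma=<size>@<start>[-<end>] - so someone who knows their platform could 
say e.g. (addresses purely as examples):

	cma=4G@36G	4GiB fixed at physical address 36GiB
	cma=4G@0-64G	4GiB anywhere below 64GiB

and an explicit range like that would then simply take precedence over 
the arch-provided default limit, rather than being capped by it as 
(IIRC) it is now.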

Thanks,
Robin.

> 
> Adding Robin in case he has a different view.
> 
>> Signed-off-by: Feng Tang <feng.tang@...ux.alibaba.com>
>> ---
>>   arch/arm64/mm/init.c | 9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index b99bf3980fc6..661758678cc4 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -312,6 +312,7 @@ void __init arm64_memblock_init(void)
>>   void __init bootmem_init(void)
>>   {
>>   	unsigned long min, max;
>> +	phys_addr_t cma_limit;
>>   
>>   	min = PFN_UP(memblock_start_of_DRAM());
>>   	max = PFN_DOWN(memblock_end_of_DRAM());
>> @@ -343,8 +344,14 @@ void __init bootmem_init(void)
>>   
>>   	/*
>>   	 * Reserve the CMA area after arm64_dma_phys_limit was initialised.
>> +	 *
>> +	 * When CONFIG_DMA_NUMA_CMA is enabled, the system may have CMA
>> +	 * areas reserved on different NUMA nodes, which likely go beyond
>> +	 * the 32-bit limit, thus use (PHYS_MASK + 1) as the CMA limit.
>>   	 */
>> -	dma_contiguous_reserve(arm64_dma_phys_limit);
>> +	cma_limit = IS_ENABLED(CONFIG_DMA_NUMA_CMA) ?
>> +			(PHYS_MASK + 1) : arm64_dma_phys_limit;
>> +	dma_contiguous_reserve(cma_limit);
>>   
>>   	/*
>>   	 * request_standard_resources() depends on crashkernel's memory being
>> -- 
>> 2.39.5 (Apple Git-154)

