Message-ID: <dac83ef4-6e4d-421b-bd54-7090d2f963d9@phytium.com.cn>
Date: Mon, 5 Jan 2026 10:38:30 +0800
From: Cui Chao <cuichao1753@...tium.com.cn>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Jonathan Cameron <Jonathan.Cameron@...wei.com>, wangyinfeng@...tium.com.cn,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: numa_memblks: Identify the accurate NUMA ID of CFMW

Hi,

Thank you for your review.

On 12/30/2025 11:18 PM, Mike Rapoport wrote:
> Hi,
>
> On Tue, Dec 30, 2025 at 05:27:50PM +0800, Cui Chao wrote:
>> In some physical memory layout designs, the address space of a CFMW
>> resides between multiple segments of system memory that belong to
>> the same NUMA node. In numa_cleanup_meminfo(), these segments of
>> system memory are merged into one larger numa_memblk. When
>> identifying which NUMA node the CFMW belongs to, it may then be
>> incorrectly assigned to the NUMA node of the merged system memory.
>> To address this
> Can you please provide an example of such a memory layout?


Example memory layout:

Physical address space:
     0x00000000 - 0x1FFFFFFF  System RAM (node0)
     0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
     0x40000000 - 0x5FFFFFFF  System RAM (node0)
     0x60000000 - 0x7FFFFFFF  System RAM (node1)

After numa_cleanup_meminfo, the two node0 segments are merged into one:
     0x00000000 - 0x5FFFFFFF  System RAM (node0)  // CFMW is inside this range
     0x60000000 - 0x7FFFFFFF  System RAM (node1)

So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
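
For illustration, a simplified sketch of how the range lookup behaves (this
is only an approximation of what meminfo_to_nid() does, not the exact kernel
code; lookup_nid is just an illustrative name):

        /*
         * Simplified lookup: return the nid of the first numa_memblk
         * whose [start, end) range contains the address.
         */
        static int lookup_nid(struct numa_meminfo *mi, u64 addr)
        {
                int i;

                for (i = 0; i < mi->nr_blks; i++) {
                        struct numa_memblk *blk = &mi->blk[i];

                        if (addr >= blk->start && addr < blk->end)
                                return blk->nid;
                }

                return NUMA_NO_NODE;
        }

After the merge, numa_meminfo contains a single node0 block covering
0x00000000 - 0x5FFFFFFF, so looking up 0x20000000 in numa_meminfo alone
returns node0 even though that address was never system RAM.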


>> scenario, the correct NUMA node can be identified by checking whether
>> the region belongs to both numa_meminfo and numa_reserved_meminfo.
>>
>> Signed-off-by: Cui Chao <cuichao1753@...tium.com.cn>
>> ---
>>   mm/numa_memblks.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
>> index 5b009a9cd8b4..1ef037f0e0e0 100644
>> --- a/mm/numa_memblks.c
>> +++ b/mm/numa_memblks.c
>> @@ -573,7 +573,8 @@ int phys_to_target_node(u64 start)
>>   	 * Prefer online nodes, but if reserved memory might be
>>   	 * hot-added continue the search with reserved ranges.
>>   	 */
>> -	if (nid != NUMA_NO_NODE)
>> +	if (nid != NUMA_NO_NODE &&
>> +		meminfo_to_nid(&numa_reserved_meminfo, start) == NUMA_NO_NODE)
> I'd suggest assigning the result of meminfo_to_nid(&numa_reserved_meminfo,
> start) to a local variable and using that in if and return statements.


I will use a local variable named reserved_nid.
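
Roughly like this (only a sketch of how I read your suggestion; the actual
v2 may differ slightly):

        int phys_to_target_node(u64 start)
        {
                int nid = meminfo_to_nid(&numa_meminfo, start);
                int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

                /*
                 * Prefer online nodes, but if the address is also covered
                 * by a reserved range (e.g. a CFMW sitting inside a merged
                 * block), return the node of the reserved range instead.
                 */
                if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
                        return nid;

                return reserved_nid;
        }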


>>   		return nid;
>>   
>>   	return meminfo_to_nid(&numa_reserved_meminfo, start);
>> -- 
>> 2.33.0
>>
>>
-- 
Best regards,
Cui Chao.

