Message-Id: <20260108094812.8757ce3ad8370668eaafb29c@linux-foundation.org>
Date: Thu, 8 Jan 2026 09:48:12 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Cui Chao <cuichao1753@...tium.com.cn>
Cc: Jonathan Cameron <Jonathan.Cameron@...wei.com>, Mike Rapoport
<rppt@...nel.org>, Wang Yinfeng <wangyinfeng@...tium.com.cn>,
linux-cxl@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID
of CFMW

On Tue, 6 Jan 2026 11:10:42 +0800 Cui Chao <cuichao1753@...tium.com.cn> wrote:
> In some physical memory layouts, the address range of a CFMW
> (CXL Fixed Memory Window) sits between multiple segments of
> system memory that belong to the same NUMA node. In
> numa_cleanup_meminfo(), these segments are merged into a single,
> larger numa_memblk. When we later identify which NUMA node the
> CFMW belongs to, it may be incorrectly assigned to the node of
> the merged system memory.
>
> Example memory layout:
>
> Physical address space:
> 0x00000000 - 0x1FFFFFFF System RAM (node0)
> 0x20000000 - 0x2FFFFFFF CXL CFMW (node2)
> 0x40000000 - 0x5FFFFFFF System RAM (node0)
> 0x60000000 - 0x7FFFFFFF System RAM (node1)
>
> After numa_cleanup_meminfo(), the two node0 segments are merged into one:
> 0x00000000 - 0x5FFFFFFF System RAM (node0) // CFMW is inside the range
> 0x60000000 - 0x7FFFFFFF System RAM (node1)
>
> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
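>
> To illustrate, a minimal sketch of the kind of merge that triggers
> this (simplified and hypothetical; the struct and function names
> are made up, this is not the actual numa_cleanup_meminfo() code):
>
>	struct memblk { u64 start, end; int nid; };
>
>	/* Two blocks with the same nid are coalesced into one span,
>	 * even though a hole (here: the CFMW) lies between them. */
>	static void merge_same_nid(struct memblk *a, struct memblk *b)
>	{
>		if (a->nid == b->nid && a->end <= b->start) {
>			if (b->end > a->end)
>				a->end = b->end; /* now also covers the CFMW */
>			b->start = b->end = 0; /* drop b */
>		}
>	}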
>
> To address this scenario, identify the correct NUMA node by
> checking whether the region belongs to numa_reserved_meminfo as
> well as to numa_meminfo.
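>
> One way to picture that check (an illustrative sketch only:
> meminfo_to_nid(), numa_meminfo and numa_reserved_meminfo exist in
> mm/numa_memblks.c, but cfmw_to_nid() is a made-up name and this
> is not the exact patch):
>
>	static int cfmw_to_nid(u64 start)
>	{
>		/* A range trimmed out of numa_meminfo, such as a
>		 * CFMW, is recorded in numa_reserved_meminfo, so a
>		 * match there carries the window's own node id. */
>		int nid = meminfo_to_nid(&numa_reserved_meminfo, start);
>
>		if (nid != NUMA_NO_NODE)
>			return nid;
>
>		/* Otherwise fall back to the (possibly merged)
>		 * numa_meminfo ranges. */
>		return meminfo_to_nid(&numa_meminfo, start);
>	}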

Thanks.
Can you please help us understand the userspace-visible runtime effects
of this incorrect assignment?