Message-ID: <CALzOmR2z0noh74aCAd=QVUBgdn7Q+Hbevs-cr1EtV_zXCuQ=PA@mail.gmail.com>
Date: Fri, 9 Jan 2026 15:05:31 +0530
From: Pratyush Brahma <pratyush.brahma@....qualcomm.com>
To: Cui Chao <cuichao1753@...tium.com.cn>
Cc: Wang Yinfeng <wangyinfeng@...tium.com.cn>, linux-cxl@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Cameron <jonathan.cameron@...wei.com>,
        Mike Rapoport <rppt@...nel.org>
Subject: Re: [PATCH v2 1/1] mm: numa_memblks: Identify the accurate NUMA ID of CFMW

On Fri, Jan 9, 2026 at 12:44 PM Cui Chao <cuichao1753@...tium.com.cn> wrote:
>
> In some physical memory layouts, the address space of a CFMW (CXL
> Fixed Memory Window) resides between multiple segments of system
> memory that belong to the same NUMA node. numa_cleanup_meminfo()
> merges these segments into one larger numa_memblk, so when the NUMA
> node of the CFMW is looked up, the window may be incorrectly assigned
> to the node of the merged system memory.
>
> Example memory layout:
>
> Physical address space:
>     0x00000000 - 0x1FFFFFFF  System RAM (node0)
>     0x20000000 - 0x2FFFFFFF  CXL CFMW (node2)
>     0x40000000 - 0x5FFFFFFF  System RAM (node0)
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
>
> After numa_cleanup_meminfo, the two node0 segments are merged into one:
>     0x00000000 - 0x5FFFFFFF  System RAM (node0) // CFMW is inside the range
>     0x60000000 - 0x7FFFFFFF  System RAM (node1)
>
> So the CFMW (0x20000000-0x2FFFFFFF) will be incorrectly assigned to node0.
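To make the failure mode concrete, here is a small userspace model of the
lookup (the struct and helper below are made up for illustration and are
not the kernel's definitions): after the merge, a plain interval search
over numa_meminfo resolves 0x20000000 to node0, while the reserved ranges
still describe it as node2.

#include <stdio.h>
#include <stdint.h>

/* Toy interval table modelling the example layout above. */
struct blk { uint64_t start, end; int nid; };

/* numa_meminfo after numa_cleanup_meminfo: the node0 segments merged. */
static const struct blk meminfo[] = {
        { 0x00000000, 0x60000000, 0 },  /* covers the CFMW hole too */
        { 0x60000000, 0x80000000, 1 },
};

/* numa_reserved_meminfo still describes the CFMW on its own node. */
static const struct blk reserved[] = {
        { 0x20000000, 0x30000000, 2 },
};

static int lookup(const struct blk *mi, int n, uint64_t addr)
{
        for (int i = 0; i < n; i++)
                if (addr >= mi[i].start && addr < mi[i].end)
                        return mi[i].nid;
        return -1;      /* NUMA_NO_NODE */
}

int main(void)
{
        uint64_t cfmw = 0x20000000;

        /* Old behaviour: the meminfo hit wins, so the CFMW lands on node0. */
        printf("meminfo lookup:  node%d\n", lookup(meminfo, 2, cfmw));
        /* With this patch: an overlap with reserved ranges wins, node2. */
        printf("reserved lookup: node%d\n", lookup(reserved, 1, cfmw));
        return 0;
}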
>
> To address this scenario, identify the correct NUMA node by checking
> whether the region is described by both numa_meminfo and
> numa_reserved_meminfo, and prefer the reserved nid when it is.
>
> Signed-off-by: Cui Chao <cuichao1753@...tium.com.cn>
> Reviewed-by: Jonathan Cameron <jonathan.cameron@...wei.com>
> ---
>  mm/numa_memblks.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index 5b009a9cd8b4..e91908ed8661 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -568,15 +568,16 @@ static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
>  int phys_to_target_node(u64 start)
>  {
>         int nid = meminfo_to_nid(&numa_meminfo, start);
> +       int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);
>
>         /*
>          * Prefer online nodes, but if reserved memory might be
>          * hot-added continue the search with reserved ranges.
It would be good to update this comment as well. With the new logic
you're not just "continuing the search", you're explicitly preferring
the reserved nid when the ranges overlap. Probably something like
"Prefer numa_meminfo unless the address is also described by the
reserved ranges, in which case use the reserved nid." (A sketch of how
the hunk could read with that wording is at the end of this mail.)
>          */
> -       if (nid != NUMA_NO_NODE)
> +       if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
>                 return nid;
>
> -       return meminfo_to_nid(&numa_reserved_meminfo, start);
> +       return reserved_nid;
>  }
>  EXPORT_SYMBOL_GPL(phys_to_target_node);
>
> --
> 2.33.0
>
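To illustrate the comment rewording mentioned above, here is roughly how
the function could read with it (untested, just to show the intent; the
code itself matches your diff):

int phys_to_target_node(u64 start)
{
        int nid = meminfo_to_nid(&numa_meminfo, start);
        int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

        /*
         * Prefer numa_meminfo, unless the address is also described by
         * the reserved ranges (e.g. a CFMW carved out of a merged
         * memblk), in which case the reserved nid is the accurate one.
         */
        if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
                return nid;

        return reserved_nid;
}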
