Message-ID: <f0a8449f4a428300a5143b6ea3a51b82@linux.dev>
Date:   Wed, 17 May 2023 02:49:47 +0000
From:   "Yajun Deng" <yajun.deng@...ux.dev>
To:     "Luck, Tony" <tony.luck@...el.com>,
        "Borislav Petkov" <bp@...en8.de>
Cc:     james.morse@....com, mchehab@...nel.org, rric@...nel.org,
        corbet@....net, linux-kernel@...r.kernel.org,
        linux-edac@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH] EDAC: Expose node link in sysfs if CONFIG_NUMA

May 17, 2023 1:25 AM, "Luck, Tony" <tony.luck@...el.com> wrote:

>>> If we have '/sys/devices/system/node/node0/mc0', then by comparing the
>>> number of DIMMs with MemTotal in meminfo, it is easy to tell whether a
>>> DIMM belonging to this NUMA node was not recognized.
>> 
>> mc != NUMA node.
> 
> Modern systems have multiple memory controllers per socket.
> On an Icelake server I see:
> 
> $ cd /sys/devices/system/edac/mc
> $ ls -l
> total 0
> drwxr-xr-x. 5 root root 0 May 16 10:13 mc0
> drwxr-xr-x. 3 root root 0 May 16 10:13 mc1
> drwxr-xr-x. 5 root root 0 May 16 10:13 mc2
> drwxr-xr-x. 3 root root 0 May 16 10:13 mc3
> drwxr-xr-x. 5 root root 0 May 16 10:13 mc4
> drwxr-xr-x. 3 root root 0 May 16 10:13 mc5
> drwxr-xr-x. 5 root root 0 May 16 10:13 mc6
> drwxr-xr-x. 3 root root 0 May 16 10:13 mc7
> drwxr-xr-x. 2 root root 0 May 16 10:13 power
> lrwxrwxrwx. 1 root root 0 May 16 03:11 subsystem -> ../../../../bus/edac
> -rw-r--r--. 1 root root 4096 May 16 03:11 uevent
> 
> But I can figure out the socket topology with:
> 
> $ grep . mc*/mc_name
> mc0/mc_name:Intel_10nm Socket#0 IMC#0
> mc1/mc_name:Intel_10nm Socket#0 IMC#1
> mc2/mc_name:Intel_10nm Socket#0 IMC#2
> mc3/mc_name:Intel_10nm Socket#0 IMC#3
> mc4/mc_name:Intel_10nm Socket#1 IMC#0
> mc5/mc_name:Intel_10nm Socket#1 IMC#1
> mc6/mc_name:Intel_10nm Socket#1 IMC#2
> mc7/mc_name:Intel_10nm Socket#1 IMC#3
> 
> I think this should help connect each "mc*" to the node it
> belongs to.
> 

Thanks!
Yes, mc_name may show the NUMA id, but that depends on the vendor EDAC module.
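For context on why this varies: the string shown by mcN/mc_name is the
free-form ctl_name that each vendor EDAC driver composes itself. A minimal
illustration of that pattern (socket_id and imc_index are placeholder names
here, not the actual driver variables):

	/*
	 * Sketch only: drivers in the Intel 10nm/Skylake family build a
	 * string like this, so the socket hint in mc_name is only as good
	 * as the individual vendor driver.
	 */
	mci->ctl_name = kasprintf(GFP_KERNEL, "Intel_10nm Socket#%d IMC#%d",
				  socket_id, imc_index);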

On the other hand, the directory '/sys/devices/system/node/node0/' should
show all the resources that belong to that node. It already has cpu* and
memory* symbolic links. A memory controller also belongs to exactly one NUMA
node, so a memory controller symbolic link should appear under the node*
directory as well.
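A rough sketch of what creating such a link could look like (illustration
only, not the actual patch; the function name is made up, and it assumes the
controller's node can be taken from dev_to_node() on mci->pdev and that
node_devices[] is populated):

#ifdef CONFIG_NUMA
/*
 * Illustrative sketch: link the mc device under its NUMA node, e.g.
 * /sys/devices/system/node/node0/mc0 -> ../../edac/mc/mc0, so the node
 * directory lists the memory controllers that belong to it.
 */
static int edac_mc_add_node_link(struct mem_ctl_info *mci)
{
	int nid = dev_to_node(mci->pdev);

	if (nid < 0 || !node_devices[nid])
		return 0;	/* no node information for this controller */

	return sysfs_create_link(&node_devices[nid]->dev.kobj,
				 &mci->dev.kobj, dev_name(&mci->dev));
}
#endif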

> -Tony
