Message-ID: <9c0d8aa8-cac7-4679-aece-af88e8129345@sk.com>
Date: Fri, 7 Mar 2025 20:46:46 +0900
From: Honggyu Kim <honggyu.kim@...com>
To: Gregory Price <gourry@...rry.net>
Cc: kernel_team@...ynix.com, Joshua Hahn <joshua.hahnjy@...il.com>,
harry.yoo@...cle.com, ying.huang@...ux.alibaba.com,
gregkh@...uxfoundation.org, rakie.kim@...com, akpm@...ux-foundation.org,
rafael@...nel.org, lenb@...nel.org, dan.j.williams@...el.com,
Jonathan.Cameron@...wei.com, dave.jiang@...el.com, horen.chuang@...ux.dev,
hannes@...xchg.org, linux-kernel@...r.kernel.org,
linux-acpi@...r.kernel.org, linux-mm@...ck.org, kernel-team@...a.com,
yunjeong.mun@...com
Subject: Re: [PATCH 2/2 v6] mm/mempolicy: Don't create weight sysfs for
memoryless nodes

On 3/7/2025 2:32 AM, Gregory Price wrote:
> On Thu, Mar 06, 2025 at 09:39:26PM +0900, Honggyu Kim wrote:
>>
>> The memoryless nodes are printed as follows after those ACPI, SRAT,
>> Node N PXM M messages.
>>
>> [ 0.010927] Initmem setup node 0 [mem 0x0000000000001000-0x000000207effffff]
>> [ 0.010930] Initmem setup node 1 [mem 0x0000060f80000000-0x0000064f7fffffff]
>> [ 0.010992] Initmem setup node 2 as memoryless
>> [ 0.011055] Initmem setup node 3 as memoryless
>> [ 0.011115] Initmem setup node 4 as memoryless
>> [ 0.011177] Initmem setup node 5 as memoryless
>> [ 0.011238] Initmem setup node 6 as memoryless
>> [ 0.011299] Initmem setup node 7 as memoryless
>> [ 0.011361] Initmem setup node 8 as memoryless
>> [ 0.011422] Initmem setup node 9 as memoryless
>> [ 0.011484] Initmem setup node 10 as memoryless
>> [ 0.011544] Initmem setup node 11 as memoryless
>>
>> This is related to why 12 node knobs are provided in sysfs by the
>> current N_POSSIBLE loop.
>>
>
> This isn't actually why; this is another symptom.  This gets printed
> because someone is marking nodes 4-11 as possible, and
> setup_nr_node_ids reports 12 total nodes:
>
> void __init setup_nr_node_ids(void)
> {
>         unsigned int highest;
>
>         highest = find_last_bit(node_possible_map.bits, MAX_NUMNODES);
>         nr_node_ids = highest + 1;
> }
>
> Given your configuration data so far, we may have a bug somewhere (or
> I'm missing a configuration piece).
Maybe there is some misunderstanding on this issue.

This isn't a problem with NUMA detection for CXL memory, but only with
the number of "node" knobs created for weighted interleave (see the
sketch after the sysfs output below).

'numactl -H' shows the correct set of nodes even without our fix.

$ numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 ...
node 0 size: 128504 MB
node 0 free: 118563 MB
node 1 cpus: 144 145 146 147 ...
node 1 size: 257961 MB
node 1 free: 242628 MB
node 2 cpus:
node 2 size: 393216 MB
node 2 free: 393216 MB
node 3 cpus:
node 3 size: 524288 MB
node 3 free: 524288 MB
node distances:
node   0   1   2   3
  0:  10  21  14  24
  1:  21  10  24  14
  2:  14  24  10  26
  3:  24  14  26  10
You can see more info below.

$ cd /sys/devices/system/node
$ ls -d node*
node0 node1 node2 node3
$ cat possible
0-11
$ cat online
0-3
$ cat has_memory
0-3
$ cat has_normal_memory
0-1
$ cat has_cpu
0-1
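
To make the point concrete, here is a minimal sketch of why 12 knobs
show up.  This is not the exact upstream code, and sysfs_wi_add_node()
is an illustrative helper name; the point is only that the sysfs group
is built over N_POSSIBLE rather than over the nodes that actually have
memory:

static int add_weighted_interleave_group(struct kobject *wi_kobj)
{
        int nid, err;

        for_each_node_state(nid, N_POSSIBLE) {  /* 0-11 on this machine */
                err = sysfs_wi_add_node(wi_kobj, nid);
                if (err)
                        return err;
        }
        return 0;
}

Note that switching the iterator to N_MEMORY alone would not be enough,
because the CXL nodes only gain memory after hotplug - which is why our
fix uses a notifier (more on that below).
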
>>> Basically I need to know:
>>> 1) Is each CXL device on a dedicated Host Bridge?
>>> 2) Is inter-host-bridge interleaving configured?
>>> 3) Is intra-host-bridge interleaving configured?
>>> 4) Do SRAT entries exist for all nodes?
>>
>> Are there some simple commands I can use to get that info?
>>
>
> The content of the CEDT would be sufficient - that will show us the
> number of CXL host bridges.

Which command do we need to get this info specifically?  My output
doesn't seem to provide anything useful for that:

$ acpidump -b
$ iasl -d *
$ cat cedt.dsl
...
**** Unknown ACPI table signature [CEDT]
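
My guess is that this iasl build is simply too old to know the CEDT
signature, so it refuses to disassemble the table.  If the firmware
provides a CEDT, the following should at least confirm it is present
(assuming the kernel exports the raw tables under /sys/firmware/acpi):

$ hexdump -C /sys/firmware/acpi/tables/CEDT | head

A newer acpica-tools build may then be able to decode it properly.
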
>
>>> 5) Why are there 12 nodes but only 10 sources?  Are there additional
>>> devices left out of your diagram?  Are there 2 CFMWS and 8 Memory
>>> Affinity records - resulting in 10 nodes?  This is strange.
>>
>> My blind guess is that there could be a logical node that combines 4
>> channels of CXL memory, so there are 5 nodes per socket.  Adding 2
>> nodes for local CPU/DRAM makes 12 nodes in total.
>>
>
> The issue is that nodes have associated memory regions. If there are
> multiple nodes with overlapping memory regions, that seems problematic.
>
> If there are "possible nodes" without memory and no real use case
> (because the memory is associated with the aggregate node) then those
> nodes probably shouldn't be reported as possible.
>
> the tl;dr here is we should figure out what is marking those nodes as
> possible.
>
>> Not sure about this part, but our approach with hotplug_memory_notifier()
>> resolves this problem. Rakie will submit an initial working patchset
>> soonish.
>
> This may just be a bandaid on the issue. We should get our node
> configuration correct from the get-go.

Not sure about that.  This must be fixed ASAP because the current
kernel is broken on this issue, and the fix should go into the hotfix
tree first.

You may think this is just a bandaid, but leaving it bleeding as-is is
not the right approach.

Our fix was posted a few hours ago.  Please have a look, then think
about the approach again.
https://lore.kernel.org/linux-mm/20250307063534.540-1-rakie.kim@sk.com
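
For reference, the rough shape of the approach is sketched below.  This
is a simplified sketch and not the posted patch itself; wi_add_node()
and wi_remove_node() are illustrative names for the helpers that create
and remove a node's sysfs knob:

#include <linux/memory.h>
#include <linux/notifier.h>

static int wi_node_notifier(struct notifier_block *nb,
                            unsigned long action, void *arg)
{
        struct memory_notify *mn = arg;
        int nid = mn->status_change_nid;

        /* Only act when a node's memory state actually changes. */
        if (nid < 0)
                return NOTIFY_OK;

        switch (action) {
        case MEM_ONLINE:        /* node gained memory: expose its knob */
                wi_add_node(nid);
                break;
        case MEM_OFFLINE:       /* node lost its memory: remove the knob */
                wi_remove_node(nid);
                break;
        }
        return NOTIFY_OK;
}

static int __init wi_notifier_init(void)
{
        hotplug_memory_notifier(wi_node_notifier, DEFAULT_CALLBACK_PRI);
        return 0;
}

This way the knobs track has_memory at runtime instead of being fixed
to N_POSSIBLE at init.
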
Thanks,
Honggyu
>
> ~Gregory