Message-ID: <c576a992-5a50-5dd3-644c-a45d4338fc85@linux.ibm.com>
Date: Mon, 25 Apr 2022 13:39:19 +0530
From: Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
To: "ying.huang@...el.com" <ying.huang@...el.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>,
Wei Xu <weixugc@...gle.com>, Yang Shi <shy828301@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Davidlohr Bueso <dave@...olabs.net>
Cc: Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Greg Thelen <gthelen@...gle.com>,
Michal Hocko <mhocko@...nel.org>,
Brice Goglin <brice.goglin@...il.com>
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state
N_DEMOTION_TARGETS
On 4/25/22 11:40 AM, ying.huang@...el.com wrote:
> On Mon, 2022-04-25 at 09:20 +0530, Aneesh Kumar K.V wrote:
>> "ying.huang@...el.com" <ying.huang@...el.com> writes:
>>
>>> Hi, All,
>>>
>>> On Fri, 2022-04-22 at 16:30 +0530, Jagdish Gediya wrote:
>>>
>>> [snip]
>>>
>>>> I think it is necessary to either have per node demotion targets
>>>> configuration or the user space interface supported by this patch
>>>> series. As we don't have a clear consensus on how the user interface
>>>> should look, we can defer the per node demotion target set
>>>> interface to the future, until the real need arises.
>>>>
>>>> The current patch series sets N_DEMOTION_TARGETS from the dax kmem
>>>> device driver; it is possible that some memory node desired as a
>>>> demotion target is not detected in the system via the dax-device kmem
>>>> probe path.
>>>>
>>>> It is also possible that some of the dax-devices are not preferred as
>>>> demotion targets, e.g. HBM; for such devices, the node shouldn't be set
>>>> to N_DEMOTION_TARGETS. In the future, support should be added to
>>>> distinguish such dax-devices and not mark them as N_DEMOTION_TARGETS
>>>> from the kernel, but for now this user space interface will be useful
>>>> to avoid such devices as demotion targets.
>>>>
>>>> We can add a read-only interface to view per node demotion targets
>>>> at /sys/devices/system/node/nodeX/demotion_targets, remove the
>>>> duplicated /sys/kernel/mm/numa/demotion_target interface, and instead
>>>> make /sys/devices/system/node/demotion_targets writable.
>>>>
>>>> Huang, Wei, Yang,
>>>> What do you suggest?
>>>
>>> We cannot remove a kernel ABI in practice. So we need to get it right
>>> the first time. Let's try to collect some information for the kernel
>>> ABI definition.
>>>
>>> The below is just a starting point, please add your requirements.
>>>
>>> 1. Jagdish has some machines with DRAM-only NUMA nodes, but they don't
>>> want to use those as demotion targets. But I don't think this is an
>>> issue in practice for now, because demote-in-reclaim is disabled by
>>> default.
>>
>> It is not just that demotion can be disabled. We should be able to
>> use demotion on a system where we can find DRAM-only NUMA nodes. That
>> cannot be achieved by /sys/kernel/mm/numa/demotion_enabled. It needs
>> something similar to N_DEMOTION_TARGETS.
>>
>
> Can you show NUMA information of your machines with DRAM-only nodes and
> PMEM nodes? We can try to find the proper demotion order for the
> system. If you can not show it, we can defer N_DEMOTION_TARGETS until
> the machine is available.
Sure, I will find one such config. As you might have noticed, this is
very easy to end up with in a virtualization setup, because the
hypervisor can assign memory to a guest VM from a NUMA node that doesn't
have any CPUs assigned to the same guest. This depends on the config of
the other guest VM instances running on the system. So any
virtualization config that has persistent memory attached can easily end
up in such a state.
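
Not the actual topology you asked for (I still need to collect that from
a real setup), but purely for illustration, such a guest config could
look something like the below, where node 2 is a CPU-less DRAM node and
node 3 is a CPU-less pmem node exposed via dax/kmem (node numbers and
sizes are hypothetical):

available: 4 nodes (0-3)
node 0 cpus: 0 1
node 0 size: n MB
node 1 cpus: 2 3
node 1 size: n MB
node 2 cpus:
node 2 size: n MB
node 3 cpus:
node 3 size: n MB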
>>> 2. For machines with PMEM installed in only 1 of 2 sockets, for example,
>>>
>>> Node 0 & 2 are CPU + DRAM nodes and node 1 is a slow
>>> memory node near node 0,
>>>
>>> available: 3 nodes (0-2)
>>> node 0 cpus: 0 1
>>> node 0 size: n MB
>>> node 0 free: n MB
>>> node 1 cpus:
>>> node 1 size: n MB
>>> node 1 free: n MB
>>> node 2 cpus: 2 3
>>> node 2 size: n MB
>>> node 2 free: n MB
>>> node distances:
>>> node 0 1 2
>>> 0: 10 40 20
>>> 1: 40 10 80
>>> 2: 20 80 10
>>>
>>> We have 2 choices,
>>>
>>> a)
>>> node demotion targets
>>> 0 1
>>> 2 1
>>
>> This is achieved by
>>
>> [PATCH v2 1/5] mm: demotion: Set demotion list differently
>>
>>>
>>> b)
>>> node demotion targets
>>> 0 1
>>> 2 X
>>
>>
>>>
>>> a) is good to take advantage of PMEM. b) is good to reduce cross-socket
>>> traffic. Both are OK as the default configuration. But some users may
>>> prefer the other one. So we need a user space ABI to override the
>>> default configuration.
>>>
>>> 3. For machines with HBM (High Bandwidth Memory), as in
>>>
>>> https://lore.kernel.org/lkml/39cbe02a-d309-443d-54c9-678a0799342d@gmail.com/
>>>
>>>> [1] local DDR = 10, remote DDR = 20, local HBM = 31, remote HBM = 41
>>>
>>> Although HBM has better performance than DDR, in the ACPI SLIT its
>>> distance to the CPU is longer. We need to provide a way to fix this. The
>>> user space ABI is one way. The desired result will be to use local DDR
>>> as the demotion target of local HBM.
>>
>>
>> IMHO the above (2b and 3) can be done using per node demotion targets. Below is
>> what I think we could do with a single slow memory NUMA node 4.
>
> If we can use writable per-node demotion targets as ABI, then we don't
> need N_DEMOTION_TARGETS.
I am not sure I understand that. Yes, once you have writable per-node
demotion targets it is easy to build any demotion order. But that doesn't
mean we should not improve the default, unless you have a reason to say
that using N_DEMOTION_TARGETS breaks an existing config.
>
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 4
>> 4
>> 4
>>
>> /sys/devices/system/node# echo 1 > node1/demotion_targets
>> bash: echo: write error: Invalid argument
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 4
>> 4
>> 4
>>
>> /sys/devices/system/node# echo 0 > node1/demotion_targets
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 0
>> 4
>> 4
>>
>> /sys/devices/system/node# echo 1 > node0/demotion_targets
>> bash: echo: write error: Invalid argument
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 0
>> 4
>> 4
>>
>> Disable demotion for a specific node.
>> /sys/devices/system/node# echo > node1/demotion_targets
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>>
>> 4
>> 4
>>
>> Reset demotion to default
>> /sys/devices/system/node# echo -1 > node1/demotion_targets
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 4
>> 4
>> 4
>>
>> When a specific device/NUMA node is set as a demotion target via the user
>> interface, it is taken out of the demotion targets of the other NUMA nodes.
>
> IMHO, we should be careful about the interaction between the
> auto-generated and the overridden demotion order.
>
Yes, we should avoid loops between the two. But if you agree with the
above ABI, we could go ahead and share the implementation code.
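
As a rough sketch of the loop avoidance I have in mind (hypothetical
behaviour of the proposed writable interface, reusing the node numbers
from the example above, not something already implemented): node 0
already demotes to node 4, so a write that would make node 4 demote back
to node 0 creates a cycle and should be rejected with -EINVAL.

/sys/devices/system/node# cat node0/demotion_targets
4
/sys/devices/system/node# echo 0 > node4/demotion_targets
bash: echo: write error: Invalid argument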
> Best Regards,
> Huang, Ying
>
>> root@...ntu-guest:/sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 4
>> 4
>> 4
>>
>> /sys/devices/system/node# echo 4 > node1/demotion_targets
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>>
>> 4
>>
>>
>>
>> If more than one node requires the same demotion target
>> /sys/devices/system/node# echo 4 > node0/demotion_targets
>> /sys/devices/system/node# cat node[0-4]/demotion_targets
>> 4
>> 4
>>
>>
>>
>> -aneesh
>
>
-aneesh