Message-ID: <8a8d14ca-0976-41cc-02cb-dd1680fa37ef@linux.ibm.com>
Date: Mon, 25 Apr 2022 20:14:58 +0530
From: Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>
Cc: "ying.huang@...el.com" <ying.huang@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
baolin.wang@...ux.alibaba.com, dave.hansen@...ux.intel.com,
shy828301@...il.com, weixugc@...gle.com, gthelen@...gle.com,
dan.j.williams@...el.com
Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state
N_DEMOTION_TARGETS
On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <jvgediya@...ux.ibm.com> wrote:
>
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@...el.com wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
>>>> NUMA nodes which are N_MEMORY and slow memory (persistent memory)
>>>> only NUMA nodes which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose a DRAM-only NUMA node as the demotion target instead
>>>> of a persistent memory node on such systems. If the DRAM-only NUMA
>>>> node is filled with demoted pages, then at some point new
>>>> allocations can start falling to persistent memory, so cold pages
>>>> end up in fast memory (due to demotion) and new pages in slow
>>>> memory. This is why persistent memory nodes should be used for
>>>> demotion and DRAM nodes should be avoided, so that DRAM remains
>>>> available for new allocations.
>>>>
>>>> The current implementation works fine on systems where memory-only
>>>> NUMA nodes can only be persistent/slow memory, but it is not
>>>> suitable for systems like the ones mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine? And the
>>> demotion order before and after your change?
>>>
>>> Is it good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node 0 1
>> 0: 10 40
>> 1: 40 10
>>
>> 1) Without the N_DEMOTION_TARGETS patch series, node 1 is the
>> demotion target for node 0, even though node 1 is a DRAM node and
>> there is no demotion target for node 1.
>
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
>
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
>
Hence the suggestion for the ability to override this from userspace.
For example, we could build a system with memory from a remote machine
(memory inception in the case of Power, which will mostly be plugged in
as regular hotpluggable memory) and slow CXL memory or OpenCAPI memory.
In the former case, this series won't consider that memory for demotion
because it is not instantiated via dax/kmem. So yes, we would definitely
need the ability to override this from userspace so that we can set
these remote memory NUMA nodes as demotion targets if we want.
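As a rough way to see which nodes this series would treat as demotion
targets, one can list the dax devices currently bound to the kmem driver
and the NUMA node each one backs. This is only a sketch, assuming the
usual sysfs layout (bound devices appear as symlinks under the driver
directory and expose a target_node attribute):

  # Sketch: nodes backed by dax/kmem devices are the only ones that
  # would get N_DEMOTION_TARGETS with this series.
  for dev in /sys/bus/dax/drivers/kmem/dax*; do
          [ -e "$dev" ] || continue
          echo "$(basename "$dev") -> node $(cat "$dev"/target_node)"
  done

A remote-DRAM node onlined as regular hotpluggable memory would not show
up in such a listing, which is why the userspace override matters for
that case.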
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node 0 1 2
>> 0: 10 40 80
>> 1: 40 10 80
>> 2: 80 80 10
>>
>> 2) Once this new node is brought online, without the
>> N_DEMOTION_TARGETS patch series, node 1 is the demotion target for
>> node 0 and node 2 is the demotion target for node 1.
>>
>> With this patch series applied,
>> 1) there is no demotion target for either node 0 or node 1 before
>> the dax device is brought online.
>
> I'd argue that is wrong. At this stage you have a tiered memory
> system, albeit one with just DRAM. Using it as such is correct
> behavior that we should not be preventing. Sure, some use cases
> wouldn't want that arrangement, but some do want it.
>
> For your case we could add a heuristic along the lines of "the
> demotion target should be at least as big as the starting point",
> but that would be a bit hacky.
>
Hence the proposal to do a per-node demotion target override with the
semantics that I explained here:
https://lore.kernel.org/linux-mm/8735i1zurt.fsf@linux.ibm.com/
Let me know if that interface would be good enough to handle all the
possible demotion target configurations we would want to have.
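To make that concrete, an override could look roughly like the
following. The sysfs path and file name here are only an assumption for
illustration; the actual interface is the one described in the linked
proposal:

  # Hypothetical per-node override: make node 2 (the dax/kmem node)
  # the only demotion target for node 0, and give node 1 no target.
  echo 2 > /sys/devices/system/node/node0/demotion_targets
  cat /sys/devices/system/node/node0/demotion_targets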
-aneesh