Date:   Mon, 25 Apr 2022 16:45:38 +0530
From:   Jagdish Gediya <jvgediya@...ux.ibm.com>
To:     "ying.huang@...el.com" <ying.huang@...el.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, baolin.wang@...ux.alibaba.com,
        dave.hansen@...ux.intel.com, aneesh.kumar@...ux.ibm.com,
        shy828301@...il.com, weixugc@...gle.com, gthelen@...gle.com,
        dan.j.williams@...el.com
Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state
 N_DEMOTION_TARGETS

On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@...el.com wrote:
> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > Some systems (e.g. PowerVM) can have both DRAM-only (fast memory)
> > NUMA nodes which are N_MEMORY and persistent-memory-only (slow
> > memory) NUMA nodes which are also N_MEMORY. As the current demotion
> > target finding algorithm works based on N_MEMORY and best distance,
> > it will choose a DRAM-only NUMA node as the demotion target instead
> > of the persistent memory node on such systems. If the DRAM-only
> > NUMA node is filled with demoted pages, then at some point new
> > allocations can start falling through to persistent memory, so cold
> > pages end up in fast memory (due to demotion) while new pages land
> > in slow memory. This is why persistent memory nodes should be used
> > for demotion and DRAM nodes should be avoided, so that DRAM remains
> > available for new allocations.
> > 
> > The current implementation works fine on systems where memory-only
> > NUMA nodes can only be persistent/slow memory, but it is not
> > suitable for systems like the one mentioned above.
> 
> Can you share the NUMA topology information of your machine? And the
> demotion order before and after your change?
> 
> Is it good to use the PMEM nodes as the demotion targets of the
> DRAM-only node too?

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

1) Without the N_DEMOTION_TARGETS patch series, 1 is the demotion
   target for 0 even though 1 is a DRAM node, and there are no
   demotion targets for 1 (1 is the only other N_MEMORY node, so the
   distance-based algorithm picks it).

The dax device's target node is 2. Binding the device to the kmem
driver brings that node online as system RAM:

$ cat /sys/bus/nd/devices/dax0.0/target_node
2
# cd /sys/bus/dax/drivers/
:/sys/bus/dax/drivers# ls
device_dax  kmem
:/sys/bus/dax/drivers# cd device_dax/
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
:/sys/bus/dax/drivers/device_dax# numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13380 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1961 MB
node 2 cpus:
node 2 size: 0 MB
node 2 free: 0 MB
node distances:
node   0   1   2
  0:  10  40  80
  1:  40  10  80
  2:  80  80  10

2) Once this new node is brought online, without the
   N_DEMOTION_TARGETS patch series, 1 is the demotion target for 0
   and 2 is the demotion target for 1.

With this patch series applied:
1) There is no demotion target for either 0 or 1 before the dax
   device is online.
2) 2 is the demotion target for both 0 and 1 after the dax device is
   online.
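
For reference, the per-node sysfs interface added later in this
series can be used to confirm this. On the machine above, the reads
would presumably look like (illustrative output, not captured from
the machine):

$ cat /sys/devices/system/node/node0/demotion_targets
2
$ cat /sys/devices/system/node/node1/demotion_targets
2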

> Best Regards,
> Huang, Ying
> 
> > This patch series introduces a new node state, N_DEMOTION_TARGETS,
> > which is used to distinguish the nodes that can be used as
> > demotion targets; node_states[N_DEMOTION_TARGETS] holds the mask
> > of such nodes.
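
As a minimal sketch of what this enables (illustrative only;
node_is_demotion_target() is a hypothetical helper name, and the
check simply reuses the existing node_state() accessor from
<linux/nodemask.h>):

  /* Hypothetical helper: check whether a node may be used as a
   * demotion target, via the new N_DEMOTION_TARGETS entry in
   * node_states[]. */
  static bool node_is_demotion_target(int nid)
  {
          return node_state(nid, N_DEMOTION_TARGETS);
  }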
> > 
> > The node state N_DEMOTION_TARGETS is also set from the dax kmem
> > driver. Certain types of memory that register through dax kmem
> > (e.g. HBM) may not be the right choice for demotion, so in the
> > future they should be distinguished based on certain attributes,
> > and the dax kmem driver should avoid setting them as
> > N_DEMOTION_TARGETS. However, the current implementation also
> > doesn't distinguish any such memory and considers all N_MEMORY
> > nodes as demotion targets, so this patch series doesn't modify
> > the current behavior.
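
A sketch of the kmem-driver side (the exact hook point in
dev_dax_kmem_probe() and the variable name are assumptions here;
node_set_state() is the existing nodemask API):

  /* Somewhere in the dax kmem probe path, after the memory is added:
   * mark the device's NUMA node as a valid demotion target. */
  node_set_state(numa_node, N_DEMOTION_TARGETS);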
> > 
> > The below command can be used to view the available demotion
> > targets in the system:
> > 
> > $ cat /sys/devices/system/node/demotion_targets
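
(Illustrative: on the example system above, this would presumably
print "2" once the dax device is online.)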
> > 
> > This patch series sets N_DEMOTION_TARGETS from the dax kmem
> > driver. It is possible that some memory node desired as a demotion
> > target is not detected in the system through the dax kmem probe
> > path. It is also possible that some of the dax devices are not
> > preferred as demotion targets, e.g. HBM; for such devices the node
> > shouldn't be set in N_DEMOTION_TARGETS. So support is also added
> > to set the demotion target list from user space, so that the
> > default behavior can be overridden to manually add or remove
> > specific nodes as demotion targets.
> > 
> > Override the demotion targets in the system (which sets
> > node_states[N_DEMOTION_TARGETS] in the kernel):
> > $ echo <node list> > /sys/devices/system/node/demotion_targets
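
For example, on the machine above one might restrict demotion targets
to the pmem-backed node only (assuming the file accepts the usual
sysfs node-list syntax):

$ echo 2 > /sys/devices/system/node/demotion_targets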
> > 
> > As node attributes under /sys/devices/system/node/ are read-only
> > by default, support is added to write node_states[] via sysfs so
> > that node_states[N_DEMOTION_TARGETS] can be modified from user
> > space.
> > 
> > It is also helpful to know the per-node demotion target path
> > prepared by the kernel, to understand the demotion behavior during
> > reclaim, so this patch series also adds a
> > /sys/devices/system/node/nodeX/demotion_targets interface to view
> > per-node demotion targets via sysfs.
> > 
> > The current code that sets migration targets is modified in this
> > patch series to avoid some of the limitations on demotion target
> > sharing and to use only N_DEMOTION_TARGETS nodes when finding
> > demotion targets.
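
A minimal sketch of the filtering idea (illustrative; "candidates" is
a hypothetical name, and nodes_and() is the standard nodemask helper):

  /* Candidate demotion targets: nodes that both have memory and are
   * flagged as demotion targets. */
  nodemask_t candidates;

  nodes_and(candidates, node_states[N_MEMORY],
            node_states[N_DEMOTION_TARGETS]);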
> > 
> > Changelog
> > ----------
> > 
> > v2:
> > In v1, only the 1st patch of this series was sent; it was
> > implemented to avoid some of the limitations on demotion target
> > sharing. However, for certain NUMA topologies the demotion targets
> > found by that patch were not optimal, so the 1st patch in this
> > series is modified according to suggestions from Huang and Baolin.
> > Different examples comparing demotion lists between the existing
> > and changed implementations can be found in the commit message of
> > the 1st patch.
> > 
> > v3:
> > - Modify patch 1 subject to make it more specific
> > - Remove /sys/kernel/mm/numa/demotion_targets interface, use
> >   /sys/devices/system/node/demotion_targets instead and make
> >   it writable to override node_states[N_DEMOTION_TARGETS].
> > - Add support to view per node demotion targets via sysfs
> > 
> > Jagdish Gediya (7):
> >   mm: demotion: Fix demotion targets sharing among sources
> >   mm: demotion: Add new node state N_DEMOTION_TARGETS
> >   drivers/base/node: Add support to write node_states[] via sysfs
> >   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
> >   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
> >   mm: demotion: expose per-node demotion targets via sysfs
> >   docs: numa: Add documentation for demotion
> > 
> >  Documentation/admin-guide/mm/index.rst        |  1 +
> >  .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
> >  drivers/base/node.c                           | 70 ++++++++++++++++---
> >  drivers/dax/kmem.c                            |  2 +
> >  include/linux/migrate.h                       |  1 +
> >  include/linux/nodemask.h                      |  1 +
> >  mm/migrate.c                                  | 54 ++++++++++----
> >  7 files changed, 162 insertions(+), 24 deletions(-)
> >  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> > 
> 
> 
> 
