Date:   Thu, 14 Apr 2022 15:00:46 +0800
From:   "ying.huang@...el.com" <ying.huang@...el.com>
To:     Jagdish Gediya <jvgediya@...ux.ibm.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     akpm@...ux-foundation.org, aneesh.kumar@...ux.ibm.com,
        baolin.wang@...ux.alibaba.com, dave.hansen@...ux.intel.com,
        dan.j.williams@...el.com, Yang Shi <shy828301@...il.com>,
        Wei Xu <weixugc@...gle.com>
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state
 N_DEMOTION_TARGETS

On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> The current implementation finds the demotion targets based on the
> node state N_MEMORY.  However, some systems may have DRAM-only NUMA
> nodes which are N_MEMORY but are not the right choices as demotion
> targets.
> 
> This patch series introduces a new node state, N_DEMOTION_TARGETS,
> which is used to distinguish the nodes that can be used as demotion
> targets.  node_states[N_DEMOTION_TARGETS] holds the list of nodes
> that can be used as demotion targets.  Support is also added to set
> the demotion target list from user space so that the default
> behavior can be overridden.

It appears that your proposed user space interface cannot solve all
problems.  Consider, for example, the following system, where nodes 0
and 2 are CPU + DRAM nodes and node 1 is a slow memory node near
node 0:

available: 3 nodes (0-2)
node 0 cpus: 0 1
node 0 size: n MB
node 0 free: n MB
node 1 cpus:
node 1 size: n MB
node 1 free: n MB
node 2 cpus: 2 3
node 2 size: n MB
node 2 free: n MB
node distances:
node   0   1   2
  0:  10  40  20
  1:  40  10  80
  2:  20  80  10

Demotion order 1:

node    demotion_target
 0              1
 1              X
 2              X

Demotion order 2:

node    demotion_target
 0              1
 1              X
 2              1

Demotion order 1 is preferred if we want to reduce cross-socket
traffic, while demotion order 2 is preferred if we want to take full
advantage of the slow memory node.  We can use either choice as the
automatically generated order, while making the other choice possible
via a user space override.

I don't know how to implement this via your proposed user space
interface.  How about the following user space interface?

1. Add a file "demotion_order_override" in
        /sys/devices/system/node/

2. When read, "1" is output if the demotion order of the system has been
overridden; "0" is output if not.

3. When "1" is written, the demotion order of the system enters the
overridden mode.  When "0" is written, the demotion order of the
system returns to the automatic mode and the demotion order is
re-generated.

4. Add a file "demotion_targets" for each node in
        /sys/devices/system/node/nodeX/

5. When read, the demotion targets of nodeX will be output.

6. When a node list is written to the file, the demotion targets of
nodeX are set to the written nodes, and the demotion order of the
system enters the overridden mode (a usage sketch follows below).
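
The interface above is only a proposal, so the sysfs paths below do
not exist in current kernels.  The following is just a minimal sketch
of how a user space tool could exercise items 4-6, using the example
topology above to select demotion order 2:

/* Sketch only: demotion_targets is the per-node file proposed in
 * this mail and does not exist in current kernels.
 */
#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(val, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}

int main(void)
{
	char buf[64] = "";
	FILE *f;

	/* Pick demotion order 2 from the example above: make node 2
	 * also demote to node 1 (item 6).
	 */
	if (write_sysfs("/sys/devices/system/node/node2/demotion_targets",
			"1"))
		perror("write demotion_targets");

	/* Read the targets back (item 5). */
	f = fopen("/sys/devices/system/node/node2/demotion_targets", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("node2 demotion targets: %s", buf);
		fclose(f);
	}
	return 0;
}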

To reduce the complexity, the demotion order of the system is either
in the overridden mode or the automatic mode.  When converting from
the automatic mode to the overridden mode, the existing demotion
targets of all nodes are retained until they are changed.  When
converting from the overridden mode to the automatic mode, the
demotion order of the system is re-generated automatically.
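
Similarly, a minimal sketch of how user space could use the proposed
demotion_order_override file (items 1-3) to inspect the mode and
switch back to the automatic mode; again, the file is only proposed
here and does not exist yet:

/* Sketch only: demotion_order_override is the file proposed above
 * and does not exist in current kernels.
 */
#include <stdio.h>

int main(void)
{
	char mode[8] = "";
	FILE *f;

	/* Item 2: check whether the demotion order has been overridden. */
	f = fopen("/sys/devices/system/node/demotion_order_override", "r");
	if (f) {
		if (fgets(mode, sizeof(mode), f))
			printf("override mode: %s", mode);
		fclose(f);
	}

	/* Item 3: write "0" to return to the automatic mode; the
	 * demotion order is then re-generated by the kernel.
	 */
	f = fopen("/sys/devices/system/node/demotion_order_override", "w");
	if (f) {
		fputs("0", f);
		fclose(f);
	}
	return 0;
}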

In the overridden mode, the demotion targets of a hot-added or
hot-removed node are set to empty, and a hot-removed node is removed
from the demotion targets of every other node.

This is an extension of the interface used in the following patch,

https://lore.kernel.org/lkml/20191016221149.74AE222C@viggo.jf.intel.com/

What do you think about this?

> The node state N_DEMOTION_TARGETS is also set from the dax kmem
> driver.  Certain types of memory which register through dax kmem
> (e.g. HBM) may not be the right choices for demotion, so in the
> future they should be distinguished based on certain attributes and
> the dax kmem driver should avoid setting them as N_DEMOTION_TARGETS.
> However, the current implementation doesn't distinguish any such
> memory either and considers all N_MEMORY nodes as demotion targets,
> so this patch series doesn't modify the current behavior.
> 

Best Regards,
Huang, Ying

[snip]
