Message-ID: <CAAPL-u9=-OHuUk=ZkNRDf3Dm_+3cBd2APL5MQpQr3_sVk_voJg@mail.gmail.com>
Date: Wed, 20 Apr 2022 22:41:13 -0700
From: Wei Xu <weixugc@...gle.com>
To: Yang Shi <shy828301@...il.com>
Cc: "ying.huang@...el.com" <ying.huang@...el.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Greg Thelen <gthelen@...gle.com>
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
On Wed, Apr 20, 2022 at 8:12 PM Yang Shi <shy828301@...il.com> wrote:
>
> On Thu, Apr 14, 2022 at 12:00 AM ying.huang@...el.com
> <ying.huang@...el.com> wrote:
> >
> > On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> > > Current implementation to find the demotion targets works
> > > based on node state N_MEMORY, however some systems may have
> > > dram only memory numa node which are N_MEMORY but not the
> > > right choices as demotion targets.
> > >
> > > This patch series introduces the new node state
> > > N_DEMOTION_TARGETS, which is used to distinguish the nodes that
> > > can be used as demotion targets. node_states[N_DEMOTION_TARGETS]
> > > holds the list of nodes that can be used as demotion
> > > targets. Support is also added to set the demotion target
> > > list from user space so that the default behavior can be overridden.
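> > >
> > > Roughly, the new state slots into enum node_states in
> > > include/linux/nodemask.h next to the existing states (a sketch,
> > > not the exact diff):
> > >
> > > enum node_states {
> > >         N_POSSIBLE,             /* The node could become online at some point */
> > >         N_ONLINE,               /* The node is online */
> > >         N_NORMAL_MEMORY,        /* The node has regular memory */
> > > #ifdef CONFIG_HIGHMEM
> > >         N_HIGH_MEMORY,          /* The node has regular or high memory */
> > > #else
> > >         N_HIGH_MEMORY = N_NORMAL_MEMORY,
> > > #endif
> > >         N_MEMORY,               /* The node has memory(regular, high, movable) */
> > >         N_CPU,                  /* The node has one or more cpus */
> > >         N_GENERIC_INITIATOR,    /* The node has one or more Generic Initiators */
> > >         N_DEMOTION_TARGETS,     /* The node can be used as a demotion target */
> > >         NR_NODE_STATES
> > > };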
> >
> > It appears that your proposed user space interface cannot solve all
> > problems. For example, for a system as follows,
> >
> > Nodes 0 & 2 are cpu + dram nodes and node 1 is a slow memory node near
> > node 0,
> >
> > available: 3 nodes (0-2)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus:
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 2 3
> > node 2 size: n MB
> > node 2 free: n MB
> > node distances:
> > node 0 1 2
> > 0: 10 40 20
> > 1: 40 10 80
> > 2: 20 80 10
> >
> > Demotion order 1:
> >
> > node demotion_target
> > 0 1
> > 1 X
> > 2 X
> >
> > Demotion order 2:
> >
> > node demotion_target
> > 0 1
> > 1 X
> > 2 1
> >
> > Demotion order 1 is preferred if we want to reduce cross-socket
> > traffic, while demotion order 2 is preferred if we want to take
> > full advantage of the slow memory node. We can take either choice
> > as the automatically generated order, while making the other choice
> > possible via user space override.
> >
> > I don't know how to implement this via your proposed user space
> > interface. How about the following user space interface?
> >
> > 1. Add a file "demotion_order_override" in
> > /sys/devices/system/node/
> >
> > 2. When read, "1" is output if the demotion order of the system has been
> > overridden; "0" is output if not.
> >
> > 3. When write "1", the demotion order of the system will become the
> > overridden mode. When write "0", the demotion order of the system will
> > become the automatic mode and the demotion order will be re-generated.
> >
> > 4. Add a file "demotion_targets" for each node in
> > /sys/devices/system/node/nodeX/
> >
> > 5. When read, the demotion targets of nodeX will be output.
> >
> > 6. When a node list is written to the file, the demotion targets of
> > nodeX will be set to the written nodes, and the demotion order of the
> > system will enter the overridden mode.
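> >
> > As a sketch of the intended usage (these file names are only
> > proposed here, nothing exists in the kernel yet), a user space tool
> > could set node 2's demotion target to node 1 and later return to
> > the automatic mode like this:
> >
> > #include <fcntl.h>
> > #include <stdio.h>
> > #include <string.h>
> > #include <unistd.h>
> >
> > /* Write a string to a sysfs file; return 0 on success. */
> > static int sysfs_write(const char *path, const char *val)
> > {
> >         int fd = open(path, O_WRONLY);
> >
> >         if (fd < 0)
> >                 return -1;
> >         if (write(fd, val, strlen(val)) < 0) {
> >                 close(fd);
> >                 return -1;
> >         }
> >         return close(fd);
> > }
> >
> > int main(void)
> > {
> >         /* Rule 6: writing a target list implicitly enters overridden mode. */
> >         if (sysfs_write("/sys/devices/system/node/node2/demotion_targets", "1"))
> >                 perror("set node2 demotion target");
> >
> >         /* Rule 3: writing "0" returns to the automatic, re-generated order. */
> >         if (sysfs_write("/sys/devices/system/node/demotion_order_override", "0"))
> >                 perror("return to automatic mode");
> >
> >         return 0;
> > }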
>
> TBH I don't think overriding demotion targets from userspace is
> quite useful in real life for now (it might become useful in the
> future, I can't tell). Imagine you manage hundreds of thousands of
> machines, which may come from different vendors, have different
> generations of hardware, and have different versions of firmware; it
> would be a nightmare for the users to configure the demotion targets
> properly. So it would be great to have the kernel configure them
> properly *without* intervention from the users.
>
> So we should pick a proper default policy and stick with that
> policy unless it doesn't work well for most workloads. I do
> understand it is hard to make everyone happy. My proposal is that
> every node in the fast tier should have at least one demotion target
> if a slow tier exists; that sounds like a reasonable default policy,
> and I think it is also what the current implementation does.
>
This is reasonable. I agree that with a decent default policy, the
overriding of per-node demotion targets can be deferred. The most
important problem here is that we should allow configurations
where memory-only nodes are not used as demotion targets, which this
patch set has already addressed.
> >
> > To reduce the complexity, the demotion order of the system is either
> > in overridden mode or automatic mode. When converting from the
> > automatic mode to the overridden mode, the existing demotion targets
> > of all nodes will be retained until explicitly changed. When
> > converting from the overridden mode to the automatic mode, the
> > demotion order of the system will be re-generated automatically.
> >
> > In overridden mode, the demotion targets of hot-added and hot-
> > removed nodes will be set to empty, and a hot-removed node will
> > also be removed from the demotion targets of all other nodes.
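> >
> > On the kernel side, this could be little more than a flag consulted
> > by the demotion order generation code. A minimal sketch (the names
> > here are made up, and I'm assuming the existing re-generation hook
> > can be reused):
> >
> > #include <linux/kernel.h>
> > #include <linux/kobject.h>
> > #include <linux/migrate.h>
> >
> > /* sketch: true when user space has taken over the demotion order */
> > static bool demotion_order_overridden;
> >
> > static ssize_t demotion_order_override_store(struct kobject *kobj,
> >                 struct kobj_attribute *attr, const char *buf, size_t count)
> > {
> >         bool override;
> >
> >         if (kstrtobool(buf, &override))
> >                 return -EINVAL;
> >
> >         /*
> >          * Entering overridden mode retains the current targets until
> >          * they are explicitly changed; leaving it re-generates the
> >          * automatic order (assumed hook: set_migration_target_nodes()).
> >          */
> >         if (demotion_order_overridden && !override)
> >                 set_migration_target_nodes();
> >         demotion_order_overridden = override;
> >
> >         return count;
> > }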
> >
> > This is an extension of the interface used in the following patch,
> >
> > https://lore.kernel.org/lkml/20191016221149.74AE222C@viggo.jf.intel.com/
> >
> > What do you think about this?
> >
> > > The node state N_DEMOTION_TARGETS is also set from the dax kmem
> > > driver. Certain types of memory which register through dax kmem
> > > (e.g. HBM) may not be the right choices for demotion, so in the
> > > future they should be distinguished based on certain attributes,
> > > and the dax kmem driver should avoid setting them as
> > > N_DEMOTION_TARGETS. However, the current implementation doesn't
> > > distinguish any such memory and considers all N_MEMORY nodes as
> > > demotion targets, so this patch series doesn't modify the current
> > > behavior.
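> > >
> > > For reference, the kmem driver side amounts to tagging the
> > > hot-added node once its memory is online. A sketch (the helper
> > > name is made up; the real change would live in
> > > dev_dax_kmem_probe()):
> > >
> > > #include <linux/nodemask.h>
> > >
> > > /* sketch: called after the kmem device's memory is onlined,
> > >  * with node == dev_dax->target_node
> > >  */
> > > static void kmem_mark_demotion_target(int node)
> > > {
> > >         /*
> > >          * HBM-like devices should eventually be excluded here based
> > >          * on some attribute; for now every kmem node qualifies, which
> > >          * matches the existing all-N_MEMORY behavior.
> > >          */
> > >         node_set_state(node, N_DEMOTION_TARGETS);
> > > }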
> > >
> >
> > Best Regards,
> > Huang, Ying
> >
> > [snip]
> >