Message-ID: <CAAPL-u89Jxutu1VH0LnO5VGdMbkLvc2M9eapuwP-y9oG9QSsrA@mail.gmail.com>
Date:   Thu, 21 Apr 2022 11:26:33 -0700
From:   Wei Xu <weixugc@...gle.com>
To:     "ying.huang@...el.com" <ying.huang@...el.com>
Cc:     Yang Shi <shy828301@...il.com>,
        Jagdish Gediya <jvgediya@...ux.ibm.com>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Greg Thelen <gthelen@...gle.com>
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Thu, Apr 21, 2022 at 12:45 AM ying.huang@...el.com
<ying.huang@...el.com> wrote:
>
> On Thu, 2022-04-21 at 00:29 -0700, Wei Xu wrote:
> > On Thu, Apr 21, 2022 at 12:08 AM ying.huang@...el.com
> > <ying.huang@...el.com> wrote:
> > >
> > > On Wed, 2022-04-20 at 23:49 -0700, Wei Xu wrote:
> > > > On Wed, Apr 20, 2022 at 11:24 PM ying.huang@...el.com
> > > > <ying.huang@...el.com> wrote:
> > > > >
> > > > > On Wed, 2022-04-20 at 22:41 -0700, Wei Xu wrote:
> > > > > > On Wed, Apr 20, 2022 at 8:12 PM Yang Shi <shy828301@...il.com> wrote:
> > > > > > >
> > > > > > > On Thu, Apr 14, 2022 at 12:00 AM ying.huang@...el.com
> > > > > > > <ying.huang@...el.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> > > > > > > > > The current implementation finds demotion targets based on
> > > > > > > > > the node state N_MEMORY; however, some systems may have
> > > > > > > > > DRAM-only memory NUMA nodes which are N_MEMORY but are not
> > > > > > > > > the right choices as demotion targets.
> > > > > > > > >
> > > > > > > > > This patch series introduces a new node state,
> > > > > > > > > N_DEMOTION_TARGETS, which is used to distinguish the nodes
> > > > > > > > > that can be used as demotion targets.
> > > > > > > > > node_states[N_DEMOTION_TARGETS] holds the list of such nodes.
> > > > > > > > > Support is also added to set the demotion target list from
> > > > > > > > > user space so that the default behavior can be overridden.
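(Purely as an illustration, not code from this series: if the proposed
N_DEMOTION_TARGETS state is wired into node_states[], an in-kernel user
could restrict target selection with the existing nodemask helpers,
roughly as in the sketch below.  for_each_node_state() and NUMA_NO_NODE
are existing kernel helpers; the function itself is hypothetical.)

#include <linux/nodemask.h>

/* Sketch only: N_DEMOTION_TARGETS is the state proposed by this series. */
static int pick_any_demotion_target(int src_nid)
{
	int nid;

	/*
	 * Consider only nodes explicitly flagged as demotion targets,
	 * so that DRAM-only N_MEMORY nodes are skipped.
	 */
	for_each_node_state(nid, N_DEMOTION_TARGETS) {
		if (nid != src_nid)
			return nid;
	}
	return NUMA_NO_NODE;
}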
> > > > > > > >
> > > > > > > > It appears that your proposed user space interface cannot solve
> > > > > > > > all problems.  For example, consider a system as follows,
> > > > > > > >
> > > > > > > > Nodes 0 & 2 are CPU + DRAM nodes, and node 1 is a slow memory
> > > > > > > > node near node 0:
> > > > > > > >
> > > > > > > > available: 3 nodes (0-2)
> > > > > > > > node 0 cpus: 0 1
> > > > > > > > node 0 size: n MB
> > > > > > > > node 0 free: n MB
> > > > > > > > node 1 cpus:
> > > > > > > > node 1 size: n MB
> > > > > > > > node 1 free: n MB
> > > > > > > > node 2 cpus: 2 3
> > > > > > > > node 2 size: n MB
> > > > > > > > node 2 free: n MB
> > > > > > > > node distances:
> > > > > > > > node   0   1   2
> > > > > > > >   0:  10  40  20
> > > > > > > >   1:  40  10  80
> > > > > > > >   2:  20  80  10
> > > > > > > >
> > > > > > > > Demotion order 1:
> > > > > > > >
> > > > > > > > node    demotion_target
> > > > > > > >  0              1
> > > > > > > >  1              X
> > > > > > > >  2              X
> > > > > > > >
> > > > > > > > Demotion order 2:
> > > > > > > >
> > > > > > > > node    demotion_target
> > > > > > > >  0              1
> > > > > > > >  1              X
> > > > > > > >  2              1
> > > > > > > >
> > > > > > > > Demotion order 1 is preferred if we want to reduce cross-socket
> > > > > > > > traffic, while demotion order 2 is preferred if we want to take
> > > > > > > > full advantage of the slow memory node.  We can take either
> > > > > > > > choice as the automatically generated order, while making the
> > > > > > > > other choice possible via a user space override.
> > > > > > > >
> > > > > > > > I don't know how to implement this via your proposed user space
> > > > > > > > interface.  How about the following user space interface?
> > > > > > > >
> > > > > > > > 1. Add a file "demotion_order_override" in
> > > > > > > >         /sys/devices/system/node/
> > > > > > > >
> > > > > > > > 2. When read, "1" is output if the demotion order of the system has been
> > > > > > > > overridden; "0" is output if not.
> > > > > > > >
> > > > > > > > 3. When "1" is written, the demotion order of the system will
> > > > > > > > enter the overridden mode.  When "0" is written, the demotion
> > > > > > > > order of the system will return to the automatic mode and the
> > > > > > > > demotion order will be re-generated.
> > > > > > > >
> > > > > > > > 4. Add a file "demotion_targets" for each node in
> > > > > > > >         /sys/devices/system/node/nodeX/
> > > > > > > >
> > > > > > > > 5. When read, the demotion targets of nodeX will be output.
> > > > > > > >
> > > > > > > > 6. When a node list is written to the file, the demotion targets
> > > > > > > > of nodeX will be set to the written nodes, and the demotion order
> > > > > > > > of the system will enter the overridden mode.
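(For illustration only: a minimal userspace sketch that exercises the
files proposed above.  Both paths and their read/write semantics are
taken from the proposal; neither exists in the kernel today.)

#include <stdio.h>

/* Write a string to a sysfs file; returns 0 on success. */
static int write_sysfs(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	char buf[64] = "";
	FILE *f;

	/* Override node 2's demotion targets to node 1 (proposed file). */
	write_sysfs("/sys/devices/system/node/node2/demotion_targets", "1");

	/* Per the proposal, the global flag should now read back as "1". */
	f = fopen("/sys/devices/system/node/demotion_order_override", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("demotion_order_override: %s", buf);
		fclose(f);
	}

	/* Writing "0" would return to the automatic demotion order. */
	write_sysfs("/sys/devices/system/node/demotion_order_override", "0");
	return 0;
}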
> > > > > > >
> > > > > > > TBH I don't think overriding demotion targets from userspace is
> > > > > > > very useful in real life for now (it might become useful in the
> > > > > > > future, I can't tell). Imagine you manage hundreds of thousands of
> > > > > > > machines, which may come from different vendors, have different
> > > > > > > generations of hardware, and run different versions of firmware;
> > > > > > > it would be a nightmare for the users to configure the demotion
> > > > > > > targets properly. So it would be great to have the kernel
> > > > > > > configure them properly *without* intervention from the users.
> > > > > > >
> > > > > > > So we should pick a proper default policy and stick with that
> > > > > > > policy unless it doesn't work well for most workloads. I do
> > > > > > > understand it is hard to make everyone happy. My proposal, that
> > > > > > > every node in the fast tier has at least one demotion target if a
> > > > > > > slow tier exists, sounds like a reasonable default policy. I think
> > > > > > > this is also the current implementation.
> > > > > > >
> > > > > >
> > > > > > This is reasonable.  I agree that with a decent default policy,
> > > > > >
> > > > >
> > > > > I agree that a decent default policy is important.  That was enhanced
> > > > > in [1/5] of this patchset.
> > > > >
> > > > > > the
> > > > > > overriding of per-node demotion targets can be deferred.  The most
> > > > > > important problem here is that we should allow the configurations
> > > > > > where memory-only nodes are not used as demotion targets, which this
> > > > > > patch set has already addressed.
> > > > >
> > > > > Do you mean the user space interface proposed by [3/5] of this patchset?
> > > >
> > > > Yes.
> > > >
> > > > > IMHO, if we want to add a user space interface, I think that it should
> > > > > be powerful enough to address all existing issues and some potential
> > > > > future issues, so that it can be stable.  I don't think it's a good idea
> > > > > to define a partial user space interface that works only for a specific
> > > > > use case and cannot be extended for other use cases.
> > > >
> > > > I actually think that they can be viewed as two separate problems: one
> > > > is to define which nodes can be used as demotion targets (this patch
> > > > set), and the other is how to initialize the per-node demotion path
> > > > (node_demotion[]).  We don't have to solve both problems at the same
> > > > time.
> > > >
> > > > If we decide to go with a per-node demotion path customization
> > > > interface to indirectly set N_DEMOTION_TARGETS, I'd prefer that there
> > > > is a single global control to turn off all demotion targets (for the
> > > > machines that don't use memory-only nodes for demotion).
> > > >
> > >
> > > There's one already.  In commit 20b51af15e01 ("mm/migrate: add sysfs
> > > interface to enable reclaim migration"), a sysfs interface
> > >
> > >         /sys/kernel/mm/numa/demotion_enabled
> > >
> > > is added to turn off all demotion targets.
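(For reference, that knob is a plain boolean sysfs file, so a post-boot
agent can flip it with a single write.  A minimal sketch, assuming
standard sysfs access permissions; it only gates demotion in the
reclaim path:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Toggle the existing /sys/kernel/mm/numa/demotion_enabled knob. */
static int set_demotion_enabled(int on)
{
	int fd = open("/sys/kernel/mm/numa/demotion_enabled", O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, on ? "1" : "0", 1);
	close(fd);
	return n == 1 ? 0 : -1;
}

int main(void)
{
	if (set_demotion_enabled(0))
		perror("demotion_enabled");
	return 0;
}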
> >
> > IIUC, this sysfs interface only turns off demotion-in-reclaim.  It
> > would be even cleaner if we had an easy way to clear node_demotion[]
> > and N_DEMOTION_TARGETS so that userspace (a post-boot agent, not
> > init scripts) can know that the machine doesn't even have memory
> > tiering hardware enabled.
> >
>
> What is the difference?  Currently we have no interface to show the
> demotion targets of a node; that is in-kernel only.  What is memory
> tiering hardware?  Optane PMEM?  Some information about it is
> available via the ACPI HMAT table.
>
> Other than demotion-in-reclaim, what else do you care about?

There is a difference: one indicates the availability of memory
tiering hardware, and the other indicates whether transparent
kernel-driven demotion from the reclaim path is activated.  With
/sys/devices/system/node/demote_targets or the per-node demotion
target interface, userspace can figure out the memory tiering
topology abstracted by the kernel.  It is possible to use
application-guided demotion without having to enable reclaim-based
demotion in the kernel.  Logically, it is also cleaner to me to
decouple the tiering node representation from the actual demotion
mechanism enablement.
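(To make that concrete: a post-boot agent could probe such an interface
roughly as below.  This is purely illustrative; the per-node
demotion_targets file is only a proposal at this point, and a missing
or empty file would simply mean the node has no demotion target.)

#include <stdio.h>

int main(void)
{
	char path[128], buf[128];
	int nid;

	/*
	 * Illustrative: scan a handful of node IDs; real code would walk
	 * /sys/devices/system/node/ instead of assuming a fixed range.
	 */
	for (nid = 0; nid < 8; nid++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/demotion_targets",
			 nid);
		f = fopen(path, "r");
		if (!f)
			continue;	/* node absent or interface not provided */
		if (fgets(buf, sizeof(buf), f))
			printf("node%d -> %s", nid, buf);
		fclose(f);
	}
	return 0;
}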

> Best Regards,
> Huang, Ying
>
>
>
