Date:   Thu, 28 Apr 2022 08:56:37 +0800
From:   "ying.huang@...el.com" <ying.huang@...el.com>
To:     Wei Xu <weixugc@...gle.com>,
        Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
Cc:     Jagdish Gediya <jvgediya@...ux.ibm.com>,
        Yang Shi <shy828301@...il.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Greg Thelen <gthelen@...gle.com>,
        Michal Hocko <mhocko@...nel.org>,
        Brice Goglin <brice.goglin@...il.com>, feng.tang@...el.com
Subject: Re: [PATCH v2 0/5] mm: demotion: Introduce new node state
 N_DEMOTION_TARGETS

On Wed, 2022-04-27 at 11:27 -0700, Wei Xu wrote:
> On Tue, Apr 26, 2022 at 10:06 PM Aneesh Kumar K V
> <aneesh.kumar@...ux.ibm.com> wrote:
> > 
> > On 4/25/22 10:26 PM, Wei Xu wrote:
> > > On Sat, Apr 23, 2022 at 8:02 PM ying.huang@...el.com
> > > <ying.huang@...el.com> wrote:
> > > > 
> > 
> > ....
> > 
> > > > 2. For machines with PMEM installed in only 1 of 2 sockets, for example,
> > > > 
> > > > Node 0 & 2 are cpu + dram nodes and node 1 is a slow
> > > > memory node near node 0:
> > > > 
> > > > available: 3 nodes (0-2)
> > > > node 0 cpus: 0 1
> > > > node 0 size: n MB
> > > > node 0 free: n MB
> > > > node 1 cpus:
> > > > node 1 size: n MB
> > > > node 1 free: n MB
> > > > node 2 cpus: 2 3
> > > > node 2 size: n MB
> > > > node 2 free: n MB
> > > > node distances:
> > > > node   0   1   2
> > > >    0:  10  40  20
> > > >    1:  40  10  80
> > > >    2:  20  80  10
> > > > 
> > > > We have 2 choices:
> > > > 
> > > > a)
> > > > node    demotion targets
> > > > 0       1
> > > > 2       1
> > > > 
> > > > b)
> > > > node    demotion targets
> > > > 0       1
> > > > 2       X
> > > > 
> > > > a) is good to take advantage of PMEM.  b) is good to reduce cross-socket
> > > > traffic.  Both are OK as the default configuration.  But some users may
> > > > prefer the other one.  So we need a user space ABI to override the
> > > > default configuration.
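
For illustration only, such an override ABI could be a per-node sysfs
attribute, say /sys/devices/system/node/nodeN/demotion_targets, that
accepts a nodelist.  The attribute name and the
node_set_demotion_targets() helper in this sketch are assumptions, not
the interface actually proposed in this series:

/* Hypothetical sketch, not the posted interface: a per-node sysfs
 * knob that lets user space replace the default demotion targets.
 */
static ssize_t demotion_targets_store(struct device *dev,
				      struct device_attribute *attr,
				      const char *buf, size_t count)
{
	nodemask_t targets;
	int ret;

	ret = nodelist_parse(buf, targets);
	if (ret)
		return ret;

	/* Demotion targets must be nodes that actually have memory. */
	if (!nodes_subset(targets, node_states[N_MEMORY]))
		return -EINVAL;

	node_set_demotion_targets(dev->id, &targets);	/* assumed helper */
	return count;
}
static DEVICE_ATTR_WO(demotion_targets);

With such a knob, configuration b) above would amount to writing an
empty nodelist for node 2.
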
> > > 
> > > I think 2(a) should be the system-wide configuration and 2(b) can be
> > > achieved with NUMA mempolicy (which needs to be added to demotion).
> > > 
> > > In general, we can view the demotion order in a way similar to
> > > allocation fallback order (after all, if we don't demote or demotion
> > > lags behind, the allocations will go to these demotion target nodes
> > > according to the allocation fallback order anyway).  If we initialize
> > > the demotion order in that way (i.e. every node can demote to any node
> > > in the next tier, and the priority of the target nodes is sorted for
> > > each source node), we don't need a per-node demotion order override
> > > from userspace.  What we need is to specify which nodes should be in
> > > each tier and to support NUMA mempolicy in demotion.
> > > 
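
As a sketch of that initialization, assuming the nodemask of the next
tier is already known for each source node (build_demotion_order() is
an illustrative name, not from the series):

/*
 * Illustrative only: order every next-tier node for @nid from nearest
 * to farthest, mirroring the allocation fallback order, so the common
 * case needs no userspace override.
 */
static int build_demotion_order(int nid, nodemask_t next_tier, int targets[])
{
	int n = 0;

	while (!nodes_empty(next_tier)) {
		int best = NUMA_NO_NODE, t;

		/* Pick the nearest remaining next-tier node. */
		for_each_node_mask(t, next_tier)
			if (best == NUMA_NO_NODE ||
			    node_distance(nid, t) < node_distance(nid, best))
				best = t;

		targets[n++] = best;
		node_clear(best, next_tier);
	}
	return n;
}
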
> > 
> > I have been wondering how we would handle this. For example: if an
> > application has specified an MPOL_BIND policy and restricted its
> > allocations to Node0 and Node1, should we demote pages allocated by
> > that application to Node10? The alternative to that demotion is
> > swapping. So from the page's point of view, we either demote to slow
> > memory or page out to swap. But if we demote, we are also breaking
> > the MPOL_BIND rule.
> 
> IMHO, the MPOL_BIND policy should be respected and demotion should be
> skipped in such cases.  Such MPOL_BIND policies can be an important
> tool for applications to override and control their memory placement
> when transparent memory tiering is enabled.  If the application
> doesn't want swapping, there are other ways to achieve that (e.g.
> mlock, disabling swap globally, setting memcg parameters, etc).
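
In code terms, the demotion path could simply bail out when the page's
allowed nodemask (however it is obtained) has no overlap with the
candidate targets; a minimal sketch:

/*
 * Illustrative check only: refuse to demote a page whose allowed
 * nodemask excludes every candidate target, so normal reclaim (swap)
 * handles it instead and MPOL_BIND is respected.
 */
static bool may_demote(nodemask_t allowed, nodemask_t demotion_targets)
{
	return nodes_intersects(allowed, demotion_targets);
}
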
> 
>
> > The above suggests we would need some kind of memory policy
> > interaction, but what I am not sure about is how to find the memory
> > policy in the demotion path.
> 
> This is indeed an important and challenging problem.  One possible
> approach is to retrieve the allowed demotion nodemask from
> page_referenced(), similar to how vm_flags is collected there.
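
A sketch of that idea, i.e. an rmap-walk callback in the style of
page_referenced() that narrows an allowed nodemask using each mapping
VMA's MPOL_BIND policy (the function name is illustrative):

/*
 * Sketch of the idea: an rmap_one-style callback that narrows an
 * "allowed" nodemask using each mapping VMA's MPOL_BIND policy.  Note
 * that only vma->vm_policy is visible here.
 */
static bool restrict_demotion_nodes(struct folio *folio,
				    struct vm_area_struct *vma,
				    unsigned long addr, void *arg)
{
	nodemask_t *allowed = arg;
	struct mempolicy *pol = vma->vm_policy;

	if (pol && pol->mode == MPOL_BIND)
		nodes_and(*allowed, *allowed, pol->nodes);

	return true;	/* continue with the next mapping */
}
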

This works for the mempolicy in struct vm_area_struct, but not for the
one in struct task_struct.  Multiple threads in a process may have
different mempolicies.

Best Regards,
Huang, Ying

> > 
> > > Cross-socket demotion should not be too big a problem in practice
> > > because we can optimize the code to do the demotion from the local CPU
> > > node (i.e. local writes to the target node and remote reads from the
> > > source node).  The bigger issue is cross-socket memory access onto the
> > > demoted pages from the applications, which is why NUMA mempolicy is
> > > important here.
> > > 
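
For that local-write optimization, one illustrative mechanism is to
queue the copy work on a CPU of the target node via queue_work_node();
demotion_wq and the surrounding function are assumed names, not code
from this series:

static struct workqueue_struct *demotion_wq;	/* assumed */

static void start_demotion_copy(int target_nid, struct work_struct *work)
{
	/*
	 * Run the copy on a CPU local to the demotion target so that
	 * the writes are local and only the reads from the source node
	 * cross the socket.
	 */
	queue_work_node(target_nid, demotion_wq, work);
}
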
> > > 
> > -aneesh

