Date:   Wed, 30 Mar 2022 22:06:52 +0530
From:   Jagdish Gediya <jvgediya@...ux.ibm.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, aneesh.kumar@...ux.ibm.com,
        baolin.wang@...ux.alibaba.com, dave.hansen@...ux.intel.com
Subject: Re: [PATCH] mm: migrate: set demotion targets differently

Hi Huang,

On Wed, Mar 30, 2022 at 02:46:51PM +0800, Huang, Ying wrote:
> Hi, Jagdish,
> 
> Jagdish Gediya <jvgediya@...ux.ibm.com> writes:
> 
> > The current implementation to identify the demotion
> > targets limits some of the opportunities to share
> > the demotion targets between multiple source nodes.
> 
> Yes.  It sounds reasonable to share demotion targets among multiple
> source nodes.
> 
> One question: are the example machines below real hardware now or in the
> near future?  Or do you just think they are possible?

They are not real hardware right now; they are future possibilities.

> And, before going into the implementation details, I think that we can
> discuss the perfect demotion order first.
> 
> > Implement logic to identify loops in the demotion targets so that
> > all the possibilities of demotion can be utilized. Don't share the
> > used-target set between all the nodes; instead, build the used-target
> > set from scratch for each individual node, based on which nodes this
> > node is a demotion target for. This helps to share the demotion
> > targets without missing any possible way of demotion.
> >
> > e.g. with the below NUMA topology, where nodes 0 & 1 are
> > cpu + dram nodes, nodes 2 & 3 are equally slower memory-only
> > nodes, and node 4 is the slowest memory-only node,
> >
> > available: 5 nodes (0-4)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus: 2 3
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus:
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node 4 cpus:
> > node 4 size: n MB
> > node 4 free: n MB
> > node distances:
> > node   0   1   2   3   4
> >   0:  10  20  40  40  80
> >   1:  20  10  40  40  80
> >   2:  40  40  10  40  80
> >   3:  40  40  40  10  80
> >   4:  80  80  80  80  10
> >
> > The existing implementation gives below demotion targets,
> >
> > node    demotion_target
> >  0              3, 2
> >  1              4
> >  2              X
> >  3              X
> >  4              X
> >
> > With this patch applied, below are the demotion targets,
> >
> > node    demotion_target
> >  0              3, 2
> >  1              3, 2
> >  2              3
> >  3              4
> >  4              X
> 
> For such a machine, I think the perfect demotion order is,
> 
> node    demotion_target
>  0              2, 3
>  1              2, 3
>  2              4
>  3              4
>  4              X

The current implementation works based on the best-distance algorithm, and
this patch doesn't change that, so based on the distances, the demotion
list is what I have mentioned. I understand 4 is a better target for 2,
but as per the mentioned NUMA distances and the current algorithm, it
doesn't get configured like that in the kernel.
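
To make that concrete, here is a minimal user-space sketch of the
best-distance selection with the per-node used-target set described in the
patch. It is not the kernel code (the bitmask representation and names such
as loop_nodes() are invented for illustration); running it on the first
example topology reproduces the table above: 0 and 1 -> {2, 3}, 2 -> {3},
3 -> {4}, 4 -> X.

/*
 * Minimal user-space sketch, NOT the kernel implementation.  It builds
 * demotion targets for the first example topology in this thread using
 * "nearest allowed node(s)" with a per-node used set.
 */
#include <stdio.h>

#define NR_NODES 5

/* node_distance() values from the first example topology */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 20, 40, 40, 80 },
	{ 20, 10, 40, 40, 80 },
	{ 40, 40, 10, 40, 80 },
	{ 40, 40, 40, 10, 80 },
	{ 80, 80, 80, 80, 10 },
};

static const unsigned int cpu_nodes = 0x03;	/* nodes 0 and 1 have CPUs */

/* targets[n]: bitmask of demotion targets chosen for node n */
static unsigned int targets[NR_NODES];

/*
 * Nodes from which @node is already reachable through the demotion edges
 * built so far (including @node itself).  Demoting to any of these would
 * create a loop, so they go into the per-node "used" set.
 */
static unsigned int loop_nodes(int node)
{
	unsigned int reach = 1u << node;
	int changed = 1;

	while (changed) {
		changed = 0;
		for (int n = 0; n < NR_NODES; n++) {
			if ((reach & (1u << n)) || !(targets[n] & reach))
				continue;
			reach |= 1u << n;
			changed = 1;
		}
	}
	return reach;
}

int main(void)
{
	unsigned int this_pass = cpu_nodes, done = 0;

	/* walk the demotion graph one "tier" at a time, starting at CPU nodes */
	while (this_pass) {
		unsigned int next_pass = 0;

		for (int n = 0; n < NR_NODES; n++) {
			unsigned int used, mask = 0;
			int best = -1;

			if (!(this_pass & (1u << n)))
				continue;

			/* CPU nodes are never targets; neither is anything demoting into n */
			used = cpu_nodes | loop_nodes(n);

			/* best distance among the still-allowed nodes ... */
			for (int t = 0; t < NR_NODES; t++)
				if (!(used & (1u << t)) &&
				    (best < 0 || dist[n][t] < best))
					best = dist[n][t];

			/* ... and every allowed node at that distance is a target */
			for (int t = 0; t < NR_NODES; t++)
				if (!(used & (1u << t)) && dist[n][t] == best)
					mask |= 1u << t;

			targets[n] = mask;
			next_pass |= mask;
		}

		done |= this_pass;
		this_pass = next_pass & ~done;
	}

	for (int n = 0; n < NR_NODES; n++) {
		printf("node %d ->%s", n, targets[n] ? "" : " X");
		for (int t = 0; t < NR_NODES; t++)
			if (targets[n] & (1u << t))
				printf(" %d", t);
		printf("\n");
	}
	return 0;
}

The point is that loop_nodes() is recomputed per source, so a target is
excluded only when reusing it would actually create a demotion cycle for
that particular source.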

> > e.g. with the below NUMA topology, where nodes 0, 1 & 2 are
> > cpu + dram nodes and node 3 is a slow memory node,
> >
> > available: 4 nodes (0-3)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus: 2 3
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 4 5
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node distances:
> > node   0   1   2   3
> >   0:  10  20  20  40
> >   1:  20  10  20  40
> >   2:  20  20  10  40
> >   3:  40  40  40  10
> >
> > The existing implementation gives below demotion targets,
> >
> > node    demotion_target
> >  0              3
> >  1              X
> >  2              X
> >  3              X
> >
> > With this patch applied, below are the demotion targets,
> >
> > node    demotion_target
> >  0              3
> >  1              3
> >  2              3
> >  3              X
> 
> I think this is perfect already.
> 
> > with the below NUMA topology, where nodes 0 & 2 are cpu + dram
> > nodes and nodes 1 & 3 are slow memory nodes,
> >
> > available: 4 nodes (0-3)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus:
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 2 3
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node distances:
> > node   0   1   2   3
> >   0:  10  40  20  80
> >   1:  40  10  80  80
> >   2:  20  80  10  40
> >   3:  80  80  40  10
> >
> > The existing implementation gives below demotion targets,
> >
> > node    demotion_target
> >  0              3
> >  1              X
> >  2              3
> >  3              X
> 
> Should be as below, as you said in another email in this thread.
> 
> node    demotion_target
>  0              1
>  1              X
>  2              3
>  3              X
> 
> > With this patch applied, below are the demotion targets,
> >
> > node    demotion_target
> >  0              1
> >  1              3
> >  2              3
> >  3              X
> 
> The original demotion order looks better to me.  1 and 3 are at the
> same level from the perspective of the whole system.
> 
> Another example: nodes 0 & 2 are cpu + dram nodes and node 1 is a slow
> memory node near node 0,
> 
> available: 3 nodes (0-2)
> node 0 cpus: 0 1
> node 0 size: n MB
> node 0 free: n MB
> node 1 cpus:
> node 1 size: n MB
> node 1 free: n MB
> node 2 cpus: 2 3
> node 2 size: n MB
> node 2 free: n MB
> node distances:
> node   0   1   2
>   0:  10  40  20
>   1:  40  10  80
>   2:  20  80  10
> 
> 
> Demotion order 1:
> 
> node    demotion_target
>  0              1
>  1              X
>  2              X
> 
> Demotion order 2:
> 
> node    demotion_target
>  0              1
>  1              X
>  2              1
> 
> Demotion order 2 looks better.  But I think that demotion order 1 makes
> some sense too (like node reclaim mode).
> 
> It seems that,
> 
> If a demotion target has the same distance to several current demotion
> sources, the demotion target should be shared among the demotion
> sources.

Yes, and that is where this patch is useful.
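
As a small illustration of that point, here is a user-space snippet (not
kernel code; nearest_targets() and the shared/per-node distinction are
spelled out only to show the effect) using the second example topology:
with a single used-target set that grows as each source is processed, only
node 0 ends up with node 3, matching the "existing implementation" table
quoted earlier; when the used set is built per node, nodes 0, 1 and 2 all
share node 3.

#include <stdio.h>

#define NR_NODES 4

/* second example topology from this thread: 0-2 cpu + dram, 3 slow memory */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 20, 20, 40 },
	{ 20, 10, 20, 40 },
	{ 20, 20, 10, 40 },
	{ 40, 40, 40, 10 },
};

static const unsigned int cpu_nodes = 0x07;	/* nodes 0, 1 and 2 have CPUs */

/* bitmask of the allowed node(s) at the smallest distance from @src */
static unsigned int nearest_targets(int src, unsigned int used)
{
	unsigned int mask = 0;
	int best = -1;

	for (int t = 0; t < NR_NODES; t++)
		if (!(used & (1u << t)) && (best < 0 || dist[src][t] < best))
			best = dist[src][t];
	for (int t = 0; t < NR_NODES; t++)
		if (!(used & (1u << t)) && dist[src][t] == best)
			mask |= 1u << t;
	return mask;
}

static void print_targets(const char *tag, int src, unsigned int mask)
{
	printf("%-8s: node %d ->", tag, src);
	if (!mask)
		printf(" X");
	for (int t = 0; t < NR_NODES; t++)
		if (mask & (1u << t))
			printf(" %d", t);
	printf("\n");
}

int main(void)
{
	unsigned int shared = cpu_nodes;

	/* one used set shared (and updated) across all sources */
	for (int src = 0; src < 3; src++) {
		unsigned int t = nearest_targets(src, shared);

		print_targets("shared", src, t);
		shared |= t;
	}

	/* used set rebuilt per source: the slow node is shared by all of them */
	for (int src = 0; src < 3; src++)
		print_targets("per-node", src, nearest_targets(src, cpu_nodes));

	return 0;
}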

> And as Dave pointed out, we may eventually need a mechanism to override
> the default demotion order generated automatically.  So we can just use
> some simple mechanism that makes sense in most cases in the kernel
> automatically, and leave the best demotion order to some user
> customization mechanism.

Yes, we need a mechanism to override the default demotion list prepared
by the current implementation. PowerVM can have a CPU-less DRAM node
as well, which in fact is not the right target for demotion because
it is fast memory. We need to distinguish between memory tiers so
that slow memory can be utilized for demotion even when there are
fast-memory-only NUMA nodes.

I think we may see implementations in the future to override the default
behavior, e.g. when systems have both fast-only and slow-only memory
nodes; in that case it will make sense to demote to a slow-memory-only
node even if it is far. But this patch is about bringing the current
implementation in line with its intended design, 'best distance based
demotion targets'.
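
To illustrate the kind of tier distinction meant here, below is a purely
hypothetical user-space sketch (nothing like this exists in the kernel at
the time of this thread; the topology, tier ids and the demotion_target()
helper are all invented): nodes are tagged with a tier id and a node is
only ever demoted to a node in a strictly slower tier, so a CPU-less but
fast DRAM node becomes a demotion source rather than a target, and the far
slow node is still preferred over the nearby fast one.

#include <stdio.h>

#define NR_NODES 3

/*
 * Made-up topology: node 0 is cpu + dram, node 1 is a CPU-less but *fast*
 * dram node (as on PowerVM), node 2 is slow memory and far from node 1.
 */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 20, 40 },
	{ 20, 10, 80 },
	{ 40, 80, 10 },
};

/* hypothetical tier ids: 0 = fast dram (with or without CPUs), 1 = slow */
static const int tier[NR_NODES] = { 0, 0, 1 };

/* nearest node in a strictly slower tier than @src, or -1 if none */
static int demotion_target(int src)
{
	int best = -1, target = -1;

	for (int t = 0; t < NR_NODES; t++) {
		if (tier[t] <= tier[src])	/* same or faster tier: never a target */
			continue;
		if (best < 0 || dist[src][t] < best) {
			best = dist[src][t];
			target = t;
		}
	}
	return target;
}

int main(void)
{
	/*
	 * Node 1 demotes to the far slow node 2 (distance 80) rather than to
	 * the nearby fast node 0, and node 1 itself is never picked as a
	 * target even though it has no CPUs.
	 */
	for (int n = 0; n < NR_NODES; n++) {
		int t = demotion_target(n);

		if (t < 0)
			printf("node %d -> X\n", n);
		else
			printf("node %d -> %d\n", n, t);
	}
	return 0;
}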

> > As can be seen above, node 3 can be a demotion target for node
> > 1, but the existing implementation doesn't configure it that way. It
> > is better to move pages from node 1 to node 3 instead of moving
> > them from node 1 to swap.
> >
> > Signed-off-by: Jagdish Gediya <jvgediya@...ux.ibm.com>
> > Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
> 
> Best Regards,
> Huang, Ying
> 
> [snip]
> 
Best Regards,
Jagdish
