Date:   Thu, 14 Apr 2022 16:57:04 +0800
From:   "ying.huang@...el.com" <ying.huang@...el.com>
To:     Jagdish Gediya <jvgediya@...ux.ibm.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, aneesh.kumar@...ux.ibm.com,
        baolin.wang@...ux.alibaba.com, dave.hansen@...ux.intel.com
Subject: Re: [PATCH v2 1/5] mm: demotion: Set demotion list differently

On Thu, 2022-04-14 at 14:18 +0530, Jagdish Gediya wrote:
> On Thu, Apr 14, 2022 at 03:09:42PM +0800, ying.huang@...el.com wrote:
> > On Wed, 2022-04-13 at 14:52 +0530, Jagdish Gediya wrote:
> > > Sharing used_targets between multiple nodes in a single
> > > pass limits some of the opportunities for demotion target
> > > sharing.
> > > 
> > > Instead of sharing the used targets between multiple nodes in
> > > a single pass, accumulate all the source nodes seen across all
> > > passes, and reset 'used_targets' to that set of source nodes
> > > when finding demotion targets for each new node, roughly as in
> > > the sketch below.
> > > 
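> > > A simplified sketch of the new loop (illustrative only, not the
> > > literal diff; establish_migrate_target() is the target-selection
> > > helper in mm/migrate.c, with its arguments trimmed here):
> > > 
> > > 	nodemask_t source_nodes = NODE_MASK_NONE;
> > > 	nodemask_t used_targets;
> > > 	int node;
> > > 
> > > 	/*
> > > 	 * Every node in this pass is a demotion source, so none
> > > 	 * of them may become a future target (avoids cycles).
> > > 	 * Accumulate sources across all passes.
> > > 	 */
> > > 	nodes_or(source_nodes, source_nodes, this_pass);
> > > 
> > > 	for_each_node_mask(node, this_pass) {
> > > 		/*
> > > 		 * Reset to the source set for each node, instead
> > > 		 * of accumulating targets across the whole pass,
> > > 		 * so multiple source nodes may share a target.
> > > 		 */
> > > 		used_targets = source_nodes;
> > > 		establish_migrate_target(node, &used_targets);
> > > 	}
> > > 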
> > > This results in more opportunities to share demotion targets
> > > between multiple source nodes, e.g. with the NUMA topology
> > > below, where nodes 0 & 1 are cpu + dram nodes, nodes 2 & 3 are
> > > equally slow memory-only nodes, and node 4 is the slowest
> > > memory-only node:
> > > 
> > > available: 5 nodes (0-4)
> > > node 0 cpus: 0 1
> > > node 0 size: n MB
> > > node 0 free: n MB
> > > node 1 cpus: 2 3
> > > node 1 size: n MB
> > > node 1 free: n MB
> > > node 2 cpus:
> > > node 2 size: n MB
> > > node 2 free: n MB
> > > node 3 cpus:
> > > node 3 size: n MB
> > > node 3 free: n MB
> > > node 4 cpus:
> > > node 4 size: n MB
> > > node 4 free: n MB
> > > node distances:
> > > node   0   1   2   3   4
> > >   0:  10  20  40  40  80
> > >   1:  20  10  40  40  80
> > >   2:  40  40  10  40  80
> > >   3:  40  40  40  10  80
> > >   4:  80  80  80  80  10
> > > 
> > > The existing implementation gives the demotion targets below:
> > > 
> > > node    demotion_target
> > >  0              3, 2
> > >  1              4
> > >  2              X
> > >  3              X
> > >  4              X
> > > 
> > > With this patch applied, the demotion targets become:
> > > 
> > > node    demotion_target
> > >  0              3, 2
> > >  1              3, 2
> > >  2              4
> > >  3              4
> > >  4              X
> > > 
> > > (This falls out of the pass structure: the first pass picks
> > > targets {3, 2} for the cpu nodes 0 and 1; the next pass then
> > > treats nodes 2 and 3 as sources and picks node 4 for both.)
> > > 
> > > Similarly, with the NUMA topology below, where nodes 0, 1 & 2
> > > are cpu + dram nodes and node 3 is a slow memory-only node:
> > > 
> > > available: 4 nodes (0-3)
> > > node 0 cpus: 0 1
> > > node 0 size: n MB
> > > node 0 free: n MB
> > > node 1 cpus: 2 3
> > > node 1 size: n MB
> > > node 1 free: n MB
> > > node 2 cpus: 4 5
> > > node 2 size: n MB
> > > node 2 free: n MB
> > > node 3 cpus:
> > > node 3 size: n MB
> > > node 3 free: n MB
> > > node distances:
> > > node   0   1   2   3
> > >   0:  10  20  20  40
> > >   1:  20  10  20  40
> > >   2:  20  20  10  40
> > >   3:  40  40  40  10
> > > 
> > > The existing implementation gives the demotion targets below:
> > > 
> > > node    demotion_target
> > >  0              3
> > >  1              X
> > >  2              X
> > >  3              X
> > > 
> > > With this patch applied, the demotion targets become:
> > > 
> > > node    demotion_target
> > >  0              3
> > >  1              3
> > >  2              3
> > >  3              X
> > > 
> > 
> > With [PATCH v1], you have described the demotion order changes for
> > the following system. I guess there's no change with [PATCH v2]?
> 
> Yes, there is no change with v2.
> 
> > With the NUMA topology below, where nodes 0 & 2 are cpu + dram
> > nodes and nodes 1 & 3 are slow memory nodes,
> > 
> > available: 4 nodes (0-3)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus:
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 2 3
> > node 2 size: n MB
> > node 2 free: n MB
> > node 3 cpus:
> > node 3 size: n MB
> > node 3 free: n MB
> > node distances:
> > node   0   1   2   3
> >   0:  10  40  20  80
> >   1:  40  10  80  80
> >   2:  20  80  10  40
> >   3:  80  80  40  10
> > 
> > And, what is the demotion order for the following system with [PATCH
> > v2]?
> > 
> > Nodes 0 & 2 are cpu + dram nodes, and node 1 is a slow memory
> > node near node 0,
> > 
> > available: 3 nodes (0-2)
> > node 0 cpus: 0 1
> > node 0 size: n MB
> > node 0 free: n MB
> > node 1 cpus:
> > node 1 size: n MB
> > node 1 free: n MB
> > node 2 cpus: 2 3
> > node 2 size: n MB
> > node 2 free: n MB
> > node distances:
> > node   0   1   2
> >   0:  10  40  20
> >   1:  40  10  80
> >   2:  20  80  10
> 
> Node 1 is the demotion target for both node 0 and node 2 with this
> patch. With the existing implementation, node 1 is the demotion
> target only for node 0; moreover, even if node 1 were near node 2
> instead of node 0, the existing implementation would still make
> node 1 the demotion target only for node 0, which is not the
> correct behavior.
> 
> For both scenarios, with this patch applied, node 1 will be the
> demotion target for both node 0 and node 2; the quick check below
> illustrates this for the matrix as given.
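> 
> A toy userspace illustration of the selection rule for the 3-node
> distance matrix above (a hypothetical standalone demo, not kernel
> code): every cpu node starts from the same set of excluded source
> nodes, so both end up picking the nearest memory-only node.
> 
> 	#include <stdio.h>
> 	#include <stdbool.h>
> 
> 	/* Distance matrix for the 3-node topology above. */
> 	static const int dist[3][3] = {
> 		{ 10, 40, 20 },
> 		{ 40, 10, 80 },
> 		{ 20, 80, 10 },
> 	};
> 	static const bool has_cpu[3] = { true, false, true };
> 
> 	int main(void)
> 	{
> 		for (int src = 0; src < 3; src++) {
> 			if (!has_cpu[src])
> 				continue;
> 			int best = -1;
> 			/* Pick the nearest node outside the source set. */
> 			for (int t = 0; t < 3; t++) {
> 				if (has_cpu[t] || t == src)
> 					continue;
> 				if (best < 0 || dist[src][t] < dist[src][best])
> 					best = t;
> 			}
> 			printf("node %d -> demotion target %d\n", src, best);
> 		}
> 		return 0;
> 	}
> 
> This prints "node 0 -> demotion target 1" and
> "node 2 -> demotion target 1".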
> 

Sounds good!  Thanks.

Acked-by: "Huang, Ying" <ying.huang@...el.com>

> > Best Regards,
> > Huang, Ying
> > 
> > 
> > [snip]
> > 
> Best regards,
> Jagdish

