Message-ID: <87lewsc4mh.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Wed, 30 Mar 2022 14:54:14 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc:     Jagdish Gediya <jvgediya@...ux.ibm.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, <akpm@...ux-foundation.org>,
        <aneesh.kumar@...ux.ibm.com>, <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH] mm: migrate: set demotion targets differently

Baolin Wang <baolin.wang@...ux.alibaba.com> writes:

> On 3/29/2022 10:04 PM, Jagdish Gediya wrote:
>> On Tue, Mar 29, 2022 at 08:26:05PM +0800, Baolin Wang wrote:
>> Hi Baolin,
>>> Hi Jagdish,
>>>
>>> On 3/29/2022 7:52 PM, Jagdish Gediya wrote:
>>>> The current implementation for identifying demotion
>>>> targets misses some opportunities to share demotion
>>>> targets between multiple source nodes.
>>>>
>>>> Implement logic to detect loops in the demotion
>>>> targets so that every possible demotion path can be
>>>> utilized. Instead of sharing one set of used targets
>>>> across all nodes, build the used-target set from
>>>> scratch for each individual node, based on which
>>>> nodes this node is itself a demotion target for.
>>>> This allows demotion targets to be shared without
>>>> missing any possible demotion path.
>>>>
>>>> e.g. with the NUMA topology below, where nodes 0 & 1
>>>> are CPU + DRAM nodes, nodes 2 & 3 are equally slower
>>>> memory-only nodes, and node 4 is the slowest
>>>> memory-only node:
>>>>
>>>> available: 5 nodes (0-4)
>>>> node 0 cpus: 0 1
>>>> node 0 size: n MB
>>>> node 0 free: n MB
>>>> node 1 cpus: 2 3
>>>> node 1 size: n MB
>>>> node 1 free: n MB
>>>> node 2 cpus:
>>>> node 2 size: n MB
>>>> node 2 free: n MB
>>>> node 3 cpus:
>>>> node 3 size: n MB
>>>> node 3 free: n MB
>>>> node 4 cpus:
>>>> node 4 size: n MB
>>>> node 4 free: n MB
>>>> node distances:
>>>> node   0   1   2   3   4
>>>>     0:  10  20  40  40  80
>>>>     1:  20  10  40  40  80
>>>>     2:  40  40  10  40  80
>>>>     3:  40  40  40  10  80
>>>>     4:  80  80  80  80  10
>>>>
>>>> The existing implementation gives the demotion targets below:
>>>>
>>>> node    demotion_target
>>>>    0              3, 2
>>>>    1              4
>>>>    2              X
>>>>    3              X
>>>>    4              X
>>>>
>>>> With this patch applied, below are the demotion targets,
>>>>
>>>> node    demotion_target
>>>>    0              3, 2
>>>>    1              3, 2
>>>>    2              3
>>>>    3              4
>>>>    4              X
>>>
>>> Node 2 and node 3 are both slow memory and have the same distance,
>>> so why should node 2 demote cold memory to node 3? They should both
>>> have the same demotion target, node 4, which is the slowest memory
>>> node, right?
>>>
>> The current demotion-target-finding algorithm works on best
>> distance: since the distance between nodes 2 & 3 is 40 while the
>> distance between nodes 2 & 4 is 80, node 2 demotes to node 3.
>
> If node 2 can demote to node 3, that means the memory on node 3 is
> colder than that on node 2, right? Node 3's access time should be
> larger than node 2's, so that we can demote colder memory from
> node 2 to node 3.
>
> But node 2 and node 3 are the same memory type and have the same
> distance, so their access times should be the same too. Why add so
> much page migration between node 2 and node 3? I'm still not sure
> about the benefits.
>
> Huang Ying and Dave, what do you think about these demotion targets?

Yes.  I think the demotion target of node 2 should be node 4, as I
said in another email in this thread.  Demoting from 2 to 3 makes no
sense.
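
To make the difference concrete, below is a minimal user-space sketch,
not the kernel code: the tier[] array, the node processing order, and
the reaches()/ok() helpers are assumptions of mine for illustration;
only the distance matrix is taken from the patch description.  A
best-distance walk restricted to same-or-slower tiers, with a check
that refuses targets whose demotion chain leads back to the source,
reproduces the patch's table (including 2 -> 3); making one comparison
strict so that targets must be in a slower tier gives 2 -> 4 instead.

/*
 * Sketch of the demotion-target selection under discussion; NOT the
 * kernel implementation.  tier[] and the processing order are assumed.
 */
#include <stdio.h>

#define NR_NODES 5

/* Distance matrix from the patch description. */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 20, 40, 40, 80 },
	{ 20, 10, 40, 40, 80 },
	{ 40, 40, 10, 40, 80 },
	{ 40, 40, 40, 10, 80 },
	{ 80, 80, 80, 80, 10 },
};

/* Assumed tiers: 0 = CPU + DRAM, 1 = slower memory, 2 = slowest. */
static const int tier[NR_NODES] = { 0, 0, 1, 1, 2 };

static unsigned int target[NR_NODES];	/* bitmask of demotion targets */

/* Can we reach @to by following demotion targets from @from? */
static int reaches(int from, int to)
{
	unsigned int seen = 0, stack = 1u << from;
	int n;

	while (stack) {
		for (n = 0; !(stack & (1u << n)); n++)
			;
		stack &= ~(1u << n);
		if (n == to)
			return 1;
		if (seen & (1u << n))
			continue;
		seen |= 1u << n;
		stack |= target[n];
	}
	return 0;
}

/*
 * Memory-only targets in the same or a slower tier; change ">=" to
 * ">" to forbid same-tier demotion (then node 2 demotes to node 4).
 */
static int ok(int n, int t)
{
	return t != n && tier[t] > 0 && tier[t] >= tier[n] &&
	       !reaches(t, n);	/* refuse targets that close a loop */
}

int main(void)
{
	int n, t, best;

	for (n = 0; n < NR_NODES; n++) {
		best = -1;
		for (t = 0; t < NR_NODES; t++)
			if (ok(n, t) && (best < 0 || dist[n][t] < dist[n][best]))
				best = t;
		printf("node %d:", n);
		if (best < 0)
			printf(" X");
		/* record every candidate at the best distance, e.g. "3, 2" */
		for (t = 0; best >= 0 && t < NR_NODES; t++)
			if (ok(n, t) && dist[n][t] == dist[n][best]) {
				target[n] |= 1u << t;
				printf(" %d", t);
			}
		printf("\n");
	}
	return 0;
}

Run as-is this prints the patch's table (0 and 1 -> {2, 3}, 2 -> 3,
3 -> 4, 4 -> X); with the strict tier comparison it prints 2 -> 4 and
3 -> 4 instead, which is what I would expect for same-tier nodes.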

Best Regards,
Huang, Ying

[snip]
