Message-ID: <SN6PR12MB2765859076BFE5B667A0C4719BCC9@SN6PR12MB2765.namprd12.prod.outlook.com>
Date:   Tue, 31 Aug 2021 15:26:45 +0000
From:   "Ramakrishnan, Krupa" <Krupa.Ramakrishnan@....com>
To:     Anshuman Khandual <anshuman.khandual@....com>,
        "Rao, Bharata Bhasker" <bharata@....com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "kamezawa.hiroyu@...fujitsu.com" <kamezawa.hiroyu@...fujitsu.com>,
        "lee.schermerhorn@...com" <lee.schermerhorn@...com>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "Srinivasan, Sadagopan" <Sadagopan.Srinivasan@....com>
Subject: RE: [FIX PATCH 2/2] mm/page_alloc: Use accumulated load when building
 node fallback list


The bandwidth is limited by underutilization of the cross-socket links, not by latency. Hotspotting on one node does not engage all hardware resources, given our routing protocol, which results in the lower bandwidth. Distributing allocations equally across nodes 0 and 1 yields the best results, as it stresses the full system capabilities.

Thanks
Krupa Ramakrishnan

-----Original Message-----
From: Anshuman Khandual <anshuman.khandual@....com> 
Sent: 31 August, 2021 4:58
To: Rao, Bharata Bhasker <bharata@....com>; linux-mm@...ck.org; linux-kernel@...r.kernel.org
Cc: akpm@...ux-foundation.org; kamezawa.hiroyu@...fujitsu.com; lee.schermerhorn@...com; mgorman@...e.de; Ramakrishnan, Krupa <Krupa.Ramakrishnan@....com>; Srinivasan, Sadagopan <Sadagopan.Srinivasan@....com>
Subject: Re: [FIX PATCH 2/2] mm/page_alloc: Use accumulated load when building node fallback list


On 8/30/21 5:46 PM, Bharata B Rao wrote:
> As an example, consider a 4 node system with the following distance 
> matrix.
>
> Node 0  1  2  3
> ----------------
> 0    10 12 32 32
> 1    12 10 32 32
> 2    32 32 10 12
> 3    32 32 12 10
>
> For this case, the node fallback list gets built like this:
>
> Node  Fallback list
> ---------------------
> 0     0 1 2 3
> 1     1 0 3 2
> 2     2 3 0 1
> 3     3 2 0 1 <-- Unexpected fallback order
>
> In the fallback list for nodes 2 and 3, the nodes 0 and 1 appear in 
> the same order which results in more allocations getting satisfied 
> from node 0 compared to node 1.
>
> The effect of this on remote memory bandwidth as seen by stream 
> benchmark is shown below:
>
> Case 1: Bandwidth from cores on nodes 2 & 3 to memory on nodes 0 & 1
>       (numactl -m 0,1 ./stream_lowOverhead ... --cores <from 2, 3>) 
> Case 2: Bandwidth from cores on nodes 0 & 1 to memory on nodes 2 & 3
>       (numactl -m 2,3 ./stream_lowOverhead ... --cores <from 0, 1>)
>
> ----------------------------------------
>               BANDWIDTH (MB/s)
>     TEST      Case 1          Case 2
> ----------------------------------------
>     COPY      57479.6         110791.8
>    SCALE      55372.9         105685.9
>      ADD      50460.6         96734.2
>   TRIADD      50397.6         97119.1
> ----------------------------------------
>
> The bandwidth drop in Case 1 occurs because most of the allocations 
> get satisfied by node 0 as it appears first in the fallback order for 
> both nodes 2 and 3.

I am wondering what causes this performance drop. Would not the memory access latency be similar between {2, 3} ---> { 0 } and {2, 3} ---> { 1 }, given that both nodes {0, 1} are at the same distance (32) from {2, 3} in the above distance matrix? Even if the preferred node changes from { 0 } to { 1 } for the accessing node { 3 }, that should not change the latency as such.

Or is the performance drop here instead caused by excessive allocation on node { 0 }, resulting in page allocation latency?
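
For reference, the ordering itself can be illustrated with a small standalone userspace sketch. This is not the actual mm/page_alloc code; the scoring and tie-breaking below are simplified assumptions. It models the fallback list build as: pick the unused node with the smallest distance, break ties by the smaller node_load, and penalize the first node of each new distance group so the next local node round-robins away from it. Accumulating the penalty (+=) versus overwriting it (=) is what changes the tie between nodes 0 and 1 for node 3.

/*
 * Standalone sketch, *not* kernel code: models fallback list
 * construction from the distance matrix plus a node_load penalty.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_NODES 4

static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 12, 32, 32 },
	{ 12, 10, 32, 32 },
	{ 32, 32, 10, 12 },
	{ 32, 32, 12, 10 },
};

/* Persists across nodes, like the kernel's static node_load[]. */
static int node_load[NR_NODES];

/* Lowest distance wins; ties go to the node with the lower load. */
static int next_best_node(int local, const bool *used)
{
	int best = -1;

	for (int n = 0; n < NR_NODES; n++) {
		if (used[n])
			continue;
		if (best < 0 ||
		    dist[local][n] < dist[local][best] ||
		    (dist[local][n] == dist[local][best] &&
		     node_load[n] < node_load[best]))
			best = n;
	}
	return best;
}

static void build_fallback(int local, bool accumulate)
{
	bool used[NR_NODES] = { false };
	int load = NR_NODES;
	int prev = local;

	printf("Node %d:", local);
	for (int i = 0; i < NR_NODES; i++) {
		int node = next_best_node(local, used);

		used[node] = true;
		/* Penalize the first node of a new distance group. */
		if (dist[local][node] != dist[local][prev]) {
			if (accumulate)
				node_load[node] += load;  /* accumulated load */
			else
				node_load[node] = load;   /* old behaviour    */
		}
		printf(" %d", node);
		prev = node;
		load--;
	}
	printf("\n");
}

int main(void)
{
	/* Pass false to see the old ordering instead. */
	for (int n = 0; n < NR_NODES; n++)
		build_fallback(n, true);
	return 0;
}

Under these assumptions, passing false reproduces the quoted 3 2 0 1 order for node 3 (both remote nodes 2 and 3 prefer node 0), while passing true gives node 3 the order 3 2 1 0, so the two remote nodes spread their fallback allocations across nodes 0 and 1.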
