Message-ID: <e1bf6346-fd93-13ee-0b38-c1d956df0e99@linux.ibm.com>
Date:   Tue, 10 May 2022 17:40:10 +0530
From:   Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
To:     Hesham Almatary <hesham.almatary@...wei.com>,
        Yang Shi <shy828301@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Huang Ying <ying.huang@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Linux MM <linux-mm@...ck.org>,
        Greg Thelen <gthelen@...gle.com>,
        Jagdish Gediya <jvgediya@...ux.ibm.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Alistair Popple <apopple@...dia.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Michal Hocko <mhocko@...nel.org>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Brice Goglin <brice.goglin@...il.com>,
        Feng Tang <feng.tang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Wei Xu <weixugc@...gle.com>
Subject: Re: RFC: Memory Tiering Kernel Interfaces

On 5/10/22 3:29 PM, Hesham Almatary wrote:
> Hello Yang,
> 
> On 5/10/2022 4:24 AM, Yang Shi wrote:
>> On Mon, May 9, 2022 at 7:32 AM Hesham Almatary
>> <hesham.almatary@...wei.com> wrote:


...

>>>
>>> node 0 has a CPU and DDR memory in tier 0, node 1 has GPU and DDR memory
>>> in tier 0, node 2 has NVMM memory in tier 1, and node 3 has some sort of
>>> bigger memory (could be a bigger DDR or something) in tier 2. The
>>> distances are as follows:
>>>
>>> --------------          --------------
>>> |   Node 0   |          |   Node 1   |
>>> |  -------   |          |  -------   |
>>> | |  DDR  |  |          | |  DDR  |  |
>>> |  -------   |          |  -------   |
>>> |            |          |            |
>>> --------------          --------------
>>>          | 20               | 120    |
>>>          v                  v        |
>>> ----------------------------         |
>>> | Node 2     PMEM          |         | 100
>>> ----------------------------         |
>>>          | 100                       |
>>>          v                           v
>>> --------------------------------------
>>> | Node 3    Large mem                |
>>> --------------------------------------
>>>
>>> node distances:
>>> node    0    1    2    3
>>>    0   10   20   20  120
>>>    1   20   10  120  100
>>>    2   20  120   10  100
>>>    3  120  100  100   10
>>>
>>> /sys/devices/system/node/memory_tiers
>>> 0-1
>>> 2
>>> 3
>>>
>>> N_TOPTIER_MEMORY: 0-1
>>>
>>>
>>> In this case, we want to be able to "skip" the demotion path from Node 1
>>> to Node 2, and make demotion go directly to Node 3 as it is closer,
>>> distance-wise. How can we accommodate this scenario (or at least not rule
>>> it out as future work) with the current RFC?
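
To make the distance-based preference above concrete, here is a quick
userspace sketch (my illustration, not the kernel's demotion code) that,
given the quoted distance matrix and tier assignment, picks for each node
the closest node in any lower tier:

/*
 * Userspace sketch only: pick, for each node, the nearest node in any
 * lower tier using the distance matrix and tier assignment quoted above.
 */
#include <stdio.h>

#define NR_NODES 4

static const int distance[NR_NODES][NR_NODES] = {
	{  10,  20,  20, 120 },
	{  20,  10, 120, 100 },
	{  20, 120,  10, 100 },
	{ 120, 100, 100,  10 },
};

/* Tiers as in the example: nodes 0-1 in tier 0, node 2 in tier 1, node 3 in tier 2 */
static const int tier[NR_NODES] = { 0, 0, 1, 2 };

static int nearest_lower_tier_node(int node)
{
	int best = -1, target;

	for (target = 0; target < NR_NODES; target++) {
		if (tier[target] <= tier[node])
			continue;	/* only demote downwards */
		if (best < 0 || distance[node][target] < distance[node][best])
			best = target;
	}
	return best;
}

int main(void)
{
	int node;

	for (node = 0; node < NR_NODES; node++) {
		int target = nearest_lower_tier_node(node);

		if (target < 0)
			printf("node %d -> no demotion target\n", node);
		else
			printf("node %d -> demotion target %d (distance %d)\n",
			       node, target, distance[node][target]);
	}
	return 0;
}

With these numbers node 0 picks node 2 (distance 20), while node 1 skips
tier 1 and picks node 3 (distance 100 vs 120), which is exactly the
behaviour asked about above.
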
>> If I remember correctly, the NUMA distance is hardcoded in the SLIT by
>> the firmware and is supposed to reflect the latency, so I suppose it is
>> the firmware's responsibility to provide correct information. And the RFC
>> assumes higher tier memory has better performance than lower tier memory
>> (latency, bandwidth, throughput, etc), so it sounds like buggy firmware
>> to have lower tier memory with a shorter distance than higher tier
>> memory IMHO.
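
As an aside, the firmware-provided SLIT distances being discussed here are
visible from userspace. A minimal sketch, assuming a 4-node system like the
example above, that dumps each node's distance row from
/sys/devices/system/node/nodeN/distance:

#include <stdio.h>

int main(void)
{
	char path[64], row[256];
	int node;

	for (node = 0; node < 4; node++) {	/* 4 nodes, as in the example */
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		f = fopen(path, "r");
		if (!f)
			continue;	/* node not present */
		if (fgets(row, sizeof(row), f))
			printf("node %d: %s", node, row);
		fclose(f);
	}
	return 0;
}
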
> 
> You are correct if you're assuming the topology is all hierarchically
> symmetric, but unfortunately, in real hardware (e.g., my example above)
> it is not. The distance/latency between two nodes in the same tier
> and a third node is different. The firmware still provides the correct
> latency, but putting a node in a tier is up to the kernel/user, and
> is relative: e.g., Node 3 could belong to tier 1 from Node 1's
> perspective, but to tier 2 from Node 0's.
> 
> A more detailed example (building on my previous one) is when having
> the GPU connected to a switch:
> 
> ----------------------------
> | Node 2     PMEM          |
> ----------------------------
>        ^
>        |
> --------------          --------------
> |   Node 0   |          |   Node 1   |
> |  -------   |          |  -------   |
> | |  DDR  |  |          | |  DDR  |  |
> |  -------   |          |  -------   |
> |    CPU     |          |    GPU     |
> --------------          --------------
>         |                  |
>         v                  v
> ----------------------------
> |         Switch           |
> ----------------------------
>         |
>         v
> --------------------------------------
> | Node 3    Large mem                |
> --------------------------------------
> 
> Here, demoting from Node 1 to Node 3 directly would be faster as
> it only has to go through one hub, compared to demoting from Node 1
> to Node 2, where it goes through two hubs. I hope that example
> clarifies things a little bit.
> 

Alistair mentioned that we want to consider GPU memory to be expensive
and want to demote from GPU to regular DRAM. In that case, for the above
example, we should end up with:

tier 0 -> Node3
tier 1 -> Node0, Node1
tier 2 -> Node2

Hence:

  node 0: allowed = 2
  node 1: allowed = 2
  node 2: allowed = empty
  node 3: allowed = 0-1, based on fallback order 1, 0
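
A rough userspace sketch of my reading of the above (not code from the RFC):
allowed is every node in the next lower tier, and the fallback order is those
nodes sorted by the SLIT distance quoted earlier:

#include <stdio.h>

#define NR_NODES 4

static const int distance[NR_NODES][NR_NODES] = {
	{  10,  20,  20, 120 },
	{  20,  10, 120, 100 },
	{  20, 120,  10, 100 },
	{ 120, 100, 100,  10 },
};

/* Tier assignment from above: node 3 in tier 0, nodes 0-1 in tier 1, node 2 in tier 2 */
static const int tier[NR_NODES] = { 1, 1, 2, 0 };

int main(void)
{
	int node, t, candidates[NR_NODES];

	for (node = 0; node < NR_NODES; node++) {
		int n = 0, i, j;

		/* allowed = every node in the next lower tier */
		for (t = 0; t < NR_NODES; t++)
			if (tier[t] == tier[node] + 1)
				candidates[n++] = t;

		/* fallback order = allowed nodes sorted by distance from "node" */
		for (i = 0; i < n; i++)
			for (j = i + 1; j < n; j++)
				if (distance[node][candidates[j]] <
				    distance[node][candidates[i]]) {
					int tmp = candidates[i];
					candidates[i] = candidates[j];
					candidates[j] = tmp;
				}

		printf("node %d: allowed =", node);
		if (!n)
			printf(" empty");
		for (i = 0; i < n; i++)
			printf(" %d", candidates[i]);
		printf("\n");
	}
	return 0;
}

This reproduces the list above, including node 3's fallback order of 1 then 0
(distance 100 vs 120).
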

-aneesh

