Date:   Fri, 28 Oct 2022 13:34:46 +0530
From:   Bharata B Rao <bharata@....com>
To:     "Huang, Ying" <ying.huang@...el.com>,
        Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alistair Popple <apopple@...dia.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Hesham Almatary <hesham.almatary@...wei.com>,
        Jagdish Gediya <jvgediya.oss@...il.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Michal Hocko <mhocko@...nel.org>,
        Tim Chen <tim.c.chen@...el.com>, Wei Xu <weixugc@...gle.com>,
        Yang Shi <shy828301@...il.com>
Subject: Re: [RFC] memory tiering: use small chunk size and more tiers

On 10/28/2022 11:16 AM, Huang, Ying wrote:
> If my understanding is correct, you think the latency / bandwidth of
> these NUMA nodes will be near each other, but may be different.
> 
> Even if the latency / bandwidth of these NUMA nodes isn't exactly the same,
> we should deal with that in memory types instead of memory tiers.
> There's only one abstract distance for each memory type.
> 
> So, I still believe we will not have many memory tiers with my proposal.
> 
> I don't care too much about the exact number, but want to discuss some
> general design choices:
> 
> a) Avoid grouping multiple memory types into one memory tier by default
>    most of the time.

Do you expect the abstract distances of two different memory types to be
close enough in real life (as in your example with CXL at 5000 and
PMEM at 5100) that they will get assigned to the same tier most of the
time?
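
To make the chunk-size dependence concrete, here is a minimal sketch
(not the kernel code; tier_id() and the chunk sizes below are just
assumptions for illustration) of how those two example abstract
distances map to tiers:

/*
 * Illustrative only: memory types whose abstract distance falls into
 * the same chunk-sized range end up in the same memory tier, so the
 * chunk size decides whether CXL (5000) and PMEM (5100) share a tier.
 */
#include <stdio.h>

static int tier_id(int adistance, int chunk_size)
{
	/* Each tier covers one chunk-sized range of abstract distance. */
	return adistance / chunk_size;
}

int main(void)
{
	int cxl = 5000, pmem = 5100;

	/* Large chunk: both values land in the same tier. */
	printf("chunk=256: CXL tier %d, PMEM tier %d\n",
	       tier_id(cxl, 256), tier_id(pmem, 256));

	/* Small chunk: they end up in different tiers. */
	printf("chunk=10:  CXL tier %d, PMEM tier %d\n",
	       tier_id(cxl, 10), tier_id(pmem, 10));

	return 0;
}

With a large chunk the two types collapse into one tier; with a small
chunk they split, which is what the question above is about.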

Do you foresee that abstract distances derived from sources like HMAT
would run into this issue?

Regards,
Bharata.
