Message-ID: <68333b21a58604f3fd0e660f1a39921ae22849d8.camel@intel.com>
Date:   Wed, 11 May 2022 15:49:34 +0800
From:   "ying.huang@...el.com" <ying.huang@...el.com>
To:     Wei Xu <weixugc@...gle.com>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
Cc:     Alistair Popple <apopple@...dia.com>,
        Yang Shi <shy828301@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Linux MM <linux-mm@...ck.org>,
        Greg Thelen <gthelen@...gle.com>,
        Jagdish Gediya <jvgediya@...ux.ibm.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        Michal Hocko <mhocko@...nel.org>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Brice Goglin <brice.goglin@...il.com>,
        Feng Tang <feng.tang@...el.com>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: RFC: Memory Tiering Kernel Interfaces

On Tue, 2022-05-10 at 22:30 -0700, Wei Xu wrote:
> On Tue, May 10, 2022 at 4:38 AM Aneesh Kumar K.V
> <aneesh.kumar@...ux.ibm.com> wrote:
> > 
> > Alistair Popple <apopple@...dia.com> writes:
> > 
> > > Wei Xu <weixugc@...gle.com> writes:
> > > 
> > > > On Thu, May 5, 2022 at 5:19 PM Alistair Popple <apopple@...dia.com> wrote:
> > > > > 
> > > > > Wei Xu <weixugc@...gle.com> writes:
> > > > > 
> > > > > [...]
> > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > Tiering Hierarchy Initialization
> > > > > > > > ==============================
> > > > > > > > 
> > > > > > > > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> > > > > > > > 
> > > > > > > > A device driver can remove its memory nodes from the top tier, e.g.
> > > > > > > > a dax driver can remove PMEM nodes from the top tier.
> > > > > > > 
> > > > > > > With the topology built by firmware we should not need this.
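
For what that driver-side opt-out could look like: a minimal sketch, assuming
the N_TOPTIER_MEMORY node state proposed in this RFC (not an upstream symbol).
node_state() and node_clear_state() are the existing helpers from
<linux/nodemask.h>; the function name and where the dax/kmem path would call
it are illustrative.

	#include <linux/nodemask.h>

	/*
	 * Opt a PMEM-backed node out of the top tier.  N_TOPTIER_MEMORY
	 * is the node state proposed in this RFC; node_state() and
	 * node_clear_state() are existing nodemask helpers.
	 */
	static void kmem_clear_top_tier(int nid)
	{
		if (node_state(nid, N_TOPTIER_MEMORY))
			node_clear_state(nid, N_TOPTIER_MEMORY);
	}
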
> > > > > 
> > > > > I agree that in an ideal world the hierarchy should be built by firmware based
> > > > > on something like the HMAT. But I also think being able to override this will be
> > > > > useful in getting there. Therefore a way of overriding the generated hierarchy
> > > > > would be good, either via sysfs or a kernel boot parameter if we don't want to
> > > > > commit to a particular user interface now.
> > > > > 
> > > > > However, I'm less sure letting device drivers override this is a good idea. How,
> > > > > for example, would a GPU driver make sure its node is in the top tier? By moving
> > > > > every node that the driver does not know about out of N_TOPTIER_MEMORY? That
> > > > > could get messy if, say, there were two drivers that both wanted their node to
> > > > > be in the top tier.
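
To make the override idea concrete: a minimal sketch of a hypothetical boot
parameter (no such parameter exists in this RFC; the name "toptier=" and the
early-boot timing are assumptions). Nodes named in the list keep the proposed
N_TOPTIER_MEMORY state; all others are cleared out of it.

	#include <linux/init.h>
	#include <linux/nodemask.h>

	/* e.g. toptier=0-1 keeps nodes 0 and 1 in the top tier */
	static int __init setup_toptier(char *str)
	{
		nodemask_t mask;
		int nid;

		if (nodelist_parse(str, mask))
			return 0;
		for_each_node(nid) {
			if (node_isset(nid, mask))
				node_set_state(nid, N_TOPTIER_MEMORY);
			else
				node_clear_state(nid, N_TOPTIER_MEMORY);
		}
		return 1;
	}
	__setup("toptier=", setup_toptier);
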
> > > > 
> > > > The suggestion is to allow a device driver to opt its memory
> > > > devices out of the top tier, not the other way around.
> > > 
> > > So how would demotion work in the case of accelerators, then? In that
> > > case we would want GPU memory to demote to DRAM, but that won't happen
> > > if both DRAM and GPU memory are in N_TOPTIER_MEMORY, and it seems the
> > > only override available with this proposal would move GPU memory into a
> > > lower tier, which is the opposite of what's needed there.
> > 
> > How about we do 3 tiers for now? dax kmem devices can be registered to
> > tier 3. By default, all NUMA nodes are registered at tier 2, and HBM or
> > GPU memory can be registered at tier 1.
> 
> This makes sense.  I will send an updated RFC based on the discussions so far.
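
As a sketch of those three fixed tiers (the enum and the
register_node_memory_tier() helper below are hypothetical, named only to make
the proposal concrete; they are not part of this RFC or the upstream kernel):

	/* Lower tier number = faster memory; demotion flows 1 -> 2 -> 3. */
	enum memory_tier {
		MEMORY_TIER_HBM_GPU = 1,	/* HBM/GPU nodes, registered by their drivers */
		MEMORY_TIER_DRAM    = 2,	/* default for all NUMA nodes */
		MEMORY_TIER_PMEM    = 3,	/* dax kmem devices, e.g. PMEM */
	};

	/* hypothetical driver-side call: a dax kmem device opting into tier 3 */
	register_node_memory_tier(nid, MEMORY_TIER_PMEM);

With explicit per-node tiers like this, demotion order falls out of the tier
numbers, so a GPU node in tier 1 can demote to DRAM in tier 2, which addresses
the accelerator case above.
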

Are these tier numbers fixed?  If so, it appears strange that the
smallest tier number is 0 on some machines, but 1 on others.

Best Regards,
Huang, Ying

