Message-ID: <CAAPL-u_40Zxe2AtYbOedDXPBfDPDCqi-OS=yYXf2FcZQS-6v4g@mail.gmail.com>
Date: Wed, 11 May 2022 19:39:56 -0700
From: Wei Xu <weixugc@...gle.com>
To: "ying.huang@...el.com" <ying.huang@...el.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Alistair Popple <apopple@...dia.com>,
Yang Shi <shy828301@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>,
Linux MM <linux-mm@...ck.org>,
Greg Thelen <gthelen@...gle.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Michal Hocko <mhocko@...nel.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Brice Goglin <brice.goglin@...il.com>,
Feng Tang <feng.tang@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: RFC: Memory Tiering Kernel Interfaces
On Wed, May 11, 2022 at 6:42 PM ying.huang@...el.com
<ying.huang@...el.com> wrote:
>
> On Wed, 2022-05-11 at 10:07 -0700, Wei Xu wrote:
> > On Wed, May 11, 2022 at 12:49 AM ying.huang@...el.com
> > <ying.huang@...el.com> wrote:
> > >
> > > On Tue, 2022-05-10 at 22:30 -0700, Wei Xu wrote:
> > > > On Tue, May 10, 2022 at 4:38 AM Aneesh Kumar K.V
> > > > <aneesh.kumar@...ux.ibm.com> wrote:
> > > > >
> > > > > Alistair Popple <apopple@...dia.com> writes:
> > > > >
> > > > > > Wei Xu <weixugc@...gle.com> writes:
> > > > > >
> > > > > > > On Thu, May 5, 2022 at 5:19 PM Alistair Popple <apopple@...dia.com> wrote:
> > > > > > > >
> > > > > > > > Wei Xu <weixugc@...gle.com> writes:
> > > > > > > >
> > > > > > > > [...]
> > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Tiering Hierarchy Initialization
> > > > > > > > > > > `=============================='
> > > > > > > > > > >
> > > > > > > > > > > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> > > > > > > > > > >
> > > > > > > > > > > A device driver can remove its memory nodes from the top tier, e.g.
> > > > > > > > > > > a dax driver can remove PMEM nodes from the top tier.
> > > > > > > > > >
> > > > > > > > > > With the topology built by firmware we should not need this.
> > > > > > > >
> > > > > > > > I agree that in an ideal world the hierarchy should be built by firmware based
> > > > > > > > on something like the HMAT. But I also think being able to override this will be
> > > > > > > > useful in getting there. Therefore a way of overriding the generated hierarchy
> > > > > > > > would be good, either via sysfs or kernel boot parameter if we don't want to
> > > > > > > > commit to a particular user interface now.
> > > > > > > >
> > > > > > > > However I'm less sure letting device drivers override this is a good idea. How
> > > > > > > > for example would a GPU driver make sure its node is in the top tier? By moving
> > > > > > > > every node that the driver does not know about out of N_TOPTIER_MEMORY? That
> > > > > > > > could get messy if, say, there were two drivers that both wanted their node to
> > > > > > > > be in the top tier.
> > > > > > >
> > > > > > > The suggestion is to allow a device driver to opt out its memory
> > > > > > > devices from the top-tier, not the other way around.
> > > > > >
> > > > > > So how would demotion work in the case of accelerators then? In that
> > > > > > case we would want GPU memory to demote to DRAM, but that won't happen
> > > > > > if both DRAM and GPU memory are in N_TOPTIER_MEMORY and it seems the
> > > > > > only override available with this proposal would move GPU memory into a
> > > > > > lower tier, which is the opposite of what's needed there.
> > > > >
> > > > > How about we do 3 tiers for now? dax kmem devices can be registered to
> > > > > tier 3. By default, all NUMA nodes can be registered at tier 2, and HBM
> > > > > or GPU nodes can be enabled to register at tier 1.
> > > >
> > > > This makes sense. I will send an updated RFC based on the discussions so far.
> > >
> > > Are these tier numbers fixed? If so, it appears strange that the
> > > smallest tier number is 0 on some machines, but 1 on others.
> >
> > When the kernel is configured to allow 3 tiers, we can always show all
> > 3 tiers. It is just that some tiers (e.g. tier 0) may be empty on
> > some machines.
>
> I still think it's better for the kernel not to create empty tiers
> when it auto-generates memory tiers. Yes, the tier numbers will not be
> absolutely stable, but in practice that only happens during system
> bootup, so it's not a big issue IMHO.
It should not be hard to hide empty tiers (e.g. tier-0) if we prefer.
But even if tier-0 is empty, we should still keep this tier in the
kernel and not move DRAM nodes into it. One reason is that an HBM
node might be hot-added into tier-0 at a later time.
> And I still think it's better to make only N-1 of the N tiers writable
> (or even readable). If "tier0" is written, how do we deal with nodes
> that were in "tier0" before the write but not after it? One possible
> way is to put them into "tierN". And while a user is customizing the
> tiers, the union of the N tiers may be incomplete.
The sysfs interfaces that I have in mind now are:

* /sys/devices/system/memtier/memtierN/nodelist (N=0, 1, 2)
  This is read-only and lists the memory nodes in a specific tier.

* /sys/devices/system/node/nodeN/memtier (N=0, 1, ...)
  This is a read-write interface. When written, the kernel moves the
  node into the user-specified tier. No other nodes are affected.

This interface should be able to avoid the above issue.
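To make the intended semantics concrete, here is a small userspace model
(a sketch, not kernel code; the node and tier numbers are hypothetical)
of the two interfaces above: a read-only per-tier nodelist, and a
per-node memtier write that moves only the written node:

```python
# Toy model of the proposed memtier sysfs semantics (not kernel code).
# Assumed layout: 3 tiers; DRAM nodes 0-1 default to tier 1; a dax/kmem
# (PMEM) node 2 starts in tier 2; tier 0 stays empty, reserved for HBM.

class MemTiers:
    def __init__(self, num_tiers=3):
        self.tiers = {t: set() for t in range(num_tiers)}

    def nodelist(self, tier):
        # Analogous to reading /sys/devices/system/memtier/memtierN/nodelist
        return sorted(self.tiers[tier])

    def set_memtier(self, node, new_tier):
        # Analogous to writing /sys/devices/system/node/nodeN/memtier:
        # only the written node moves; no other node is affected.
        for members in self.tiers.values():
            members.discard(node)
        self.tiers[new_tier].add(node)

tiers = MemTiers()
for n in (0, 1):
    tiers.set_memtier(n, 1)   # DRAM nodes in the middle tier
tiers.set_memtier(2, 2)       # PMEM node in the lowest tier

tiers.set_memtier(2, 1)       # move node 2 up; nodes 0 and 1 untouched
```

Note that tier 0 remains empty throughout, matching the point above that
an empty top tier is kept in the kernel rather than refilled with DRAM
nodes.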
> > BTW, userspace should not assume a specific meaning for a
> > particular tier id, because it can change depending on the number of
> > tiers that the kernel is configured with. For example, userspace
> > should not assume that tier-2 always means PMEM nodes. In a system
> > with 4 tiers, PMEM nodes may be in tier-3, not tier-2.
>
> Yes. This sounds good.
>
> Best Regards,
> Huang, Ying
>