Message-ID: <CAAPL-u-O5nH-7ira3htQ9pUdb3u5oCRpmcxafL9Abo0kWACXaw@mail.gmail.com>
Date: Mon, 9 May 2022 21:43:27 -0700
From: Wei Xu <weixugc@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Alistair Popple <apopple@...dia.com>,
Davidlohr Bueso <dave@...olabs.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Huang Ying <ying.huang@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Yang Shi <shy828301@...il.com>, Linux MM <linux-mm@...ck.org>,
Greg Thelen <gthelen@...gle.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...nel.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Brice Goglin <brice.goglin@...il.com>,
Feng Tang <feng.tang@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>
Subject: Re: RFC: Memory Tiering Kernel Interfaces
On Thu, May 5, 2022 at 7:24 AM Dave Hansen <dave.hansen@...el.com> wrote:
>
> On 5/4/22 23:35, Wei Xu wrote:
> > On Wed, May 4, 2022 at 10:02 AM Dave Hansen <dave.hansen@...el.com> wrote:
> >> That means a lot of page table and EPT walks to map those linear
> >> addresses back to physical. That adds to the inefficiency.
> >
> > That's true if the tracking is purely based on physical pages. For
> > hot page tracking from PEBS, we can consider tracking in
> > virtual/linear addresses. We don't need to maintain the history for
> > all linear page addresses nor for an indefinite amount of time. After
> > all, we just need to identify pages accessed frequently recently and
> > promote them.
>
> Except that you don't want to promote on *every* access. That might
> lead to too much churn.
Certainly. We should use PMU events to help build a page heatmap in
software and then select the hottest pages to promote.
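
To make that concrete, here is a rough sketch of what the software
side could look like (structure, names and the threshold are purely
illustrative, not an actual proposal): every sampled access bumps a
decayed per-page weight, and only pages whose weight crosses the
threshold become promotion candidates, rather than promoting on every
access.

#include <stdint.h>
#include <stddef.h>

/* Illustrative only: per-page heat record, keyed by page frame number. */
struct page_heat {
	uint64_t pfn;
	uint32_t weight;	/* decayed count of sampled accesses */
};

/* Made-up tunable: weight at which a page becomes a promotion candidate. */
#define PROMOTE_WEIGHT	8

/*
 * Account one sampled access.  Returns non-zero when the page has
 * accumulated enough recent accesses to be considered for promotion.
 */
static int heat_record_access(struct page_heat *h)
{
	if (h->weight < PROMOTE_WEIGHT)
		h->weight++;
	return h->weight >= PROMOTE_WEIGHT;
}

/* Periodic aging, so that "hot" means accessed frequently *recently*. */
static void heat_age(struct page_heat *table, size_t nr)
{
	size_t i;

	for (i = 0; i < nr; i++)
		table[i].weight >>= 1;	/* exponential decay */
}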
> You're also assuming that all accesses to a physical page are via a
> single linear address, which ignores shared memory mapped at different
> linear addresses. Our (maybe wrong) assumption has been that shared
> memory is important enough to manage that it can't be ignored.
Shared memory is important. Special handling will be needed to better
support such pages in linear-address-based hot page tracking.
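
For example (continuing the sketch above; resolve_pfn() and
heat_lookup() are hypothetical helpers), each linear-address sample
would be folded onto the backing physical page before any weight
comparison, so that a page mapped at several linear addresses collects
the heat from all of its mappings:

/* Hypothetical helpers: resolve_pfn() walks the sampled task's page
 * tables to find the backing page; heat_lookup() finds or creates the
 * per-PFN record from the earlier sketch. */
extern uint64_t resolve_pfn(const void *mm, uint64_t vaddr);
extern struct page_heat *heat_lookup(uint64_t pfn);

static void account_linear_sample(const void *mm, uint64_t vaddr)
{
	uint64_t pfn = resolve_pfn(mm, vaddr);

	if (!pfn)
		return;		/* racing unmap: drop the sample */

	/*
	 * Key the heat on the physical page, not the linear address,
	 * so a file or shmem page shared by several processes is
	 * credited for accesses through every one of its mappings.
	 */
	heat_record_access(heat_lookup(pfn));
}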
> >> In the end, you get big PEBS buffers with lots of irrelevant data that
> >> needs significant post-processing to make sense of it.
> >
> > I am curious about what are "lots of irrelevant data" if PEBS data is
> > filtered on data sources (e.g. DRAM vs PMEM) by hardware. If we need
> > to have different policies for the pages from the same data source,
> > then I agree that the software has to do a lot of filtering work.
>
> Perhaps "irrelevant" was a bad term to use. I meant that you can't just
> take the PEBS data and act directly on it. It has to be post-processed
> and you will see things in there like lots of adjacent accesses to a
> page. Those additional accesses can be interesting but at some point
> you have all the weight you need to promote the page and the _rest_ are
> irrelevant.
That's right. The software has to do the post-processing work to
build the page heatmap from what the existing hardware provides.
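
In other words, something along these lines for each batch of decoded
samples (again just a sketch, reusing the hypothetical helpers above):
once a page has reached the promotion weight, the remaining samples
for it in the batch add nothing and can be skipped.

/*
 * Sketch: collapse one window of decoded samples (already reduced to
 * PFNs) into per-page weights, collecting promotion candidates.
 */
static size_t process_sample_window(const uint64_t *pfns, size_t nr,
				    uint64_t *promote, size_t max_promote)
{
	size_t i, nr_promote = 0;

	for (i = 0; i < nr; i++) {
		struct page_heat *h = heat_lookup(pfns[i]);

		/* Already hot enough: further samples are redundant. */
		if (h->weight >= PROMOTE_WEIGHT)
			continue;

		if (heat_record_access(h) && nr_promote < max_promote)
			promote[nr_promote++] = pfns[i];
	}

	return nr_promote;
}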
> >> The folks at Intel who tried this really struggled to take this
> >> mess and turn it into successful hot-page tracking.
> >>
> >> Maybe someone else will find a better way to do it, but we tried and
> >> gave up.
> >
> > It might be challenging to use PEBS as the only and universal hot page
> > tracking hardware mechanism. For example, there are challenges to use
> > PEBS to sample KVM guest accesses from the host.
>
> Yep, agreed. This aspect of the hardware is very painful at the moment.
>
> > On the other hand, PEBS with hardware-based data source filtering can
> > be a useful mechanism to improve hot page tracking in conjunction
> > with other techniques.
>
> Rather than "can", I'd say: "might". Backing up to what I said originally:
>
> > So, in practice, these events (PEBS) weren't very useful
> > for driving memory tiering.
>
> By "driving" I really meant solely driving. Like, can PEBS be used as
> the one and only mechanism? We couldn't make it work. But, the
> hardware _is_ sitting there mostly unused. It might be great to augment
> what is there, and nobody should be discouraged from looking at it again.
I think we are on the same page.
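
For reference, the kind of PEBS sampling we have been discussing can
be requested from user space roughly as below. This is only a sketch:
the raw event encoding is a model-specific placeholder, and the
data-source classification (fast tier vs. slow tier) still has to be
done in software when the samples are parsed.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

/*
 * Sketch: open a precise (PEBS) load-sampling event that records the
 * data linear address and the data source of each sampled load.  The
 * raw config value is a placeholder; the real encoding is
 * model-specific (e.g. a load-latency event on Intel parts).
 */
static int open_load_sampling_event(pid_t pid, int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	attr.config = 0;			/* placeholder event encoding */
	attr.sample_period = 10007;		/* ~1 sample per 10k events */
	attr.sample_type = PERF_SAMPLE_TID | PERF_SAMPLE_ADDR |
			   PERF_SAMPLE_DATA_SRC;
	attr.precise_ip = 2;			/* request precise/PEBS samples */
	attr.exclude_kernel = 1;

	return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
}

When the ring buffer is parsed, the PERF_SAMPLE_DATA_SRC word (union
perf_mem_data_src) says which level of the memory hierarchy served the
load, which is where DRAM-vs-slower-tier filtering would happen in
software if the hardware cannot filter by data source directly.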