Message-ID: <4808d3fa-bb68-d4c8-681f-0b2770d78041@intel.com>
Date: Wed, 8 Feb 2023 10:12:45 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>,
Bharata B Rao <bharata@....com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, mgorman@...e.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, akpm@...ux-foundation.org, luto@...nel.org,
tglx@...utronix.de, yue.li@...verge.com,
Ravikumar.Bangoria@....com, ying.huang@...el.com
Subject: Re: [RFC PATCH 0/5] Memory access profiler(IBS) driven NUMA balancing
On 2/8/23 10:03, Peter Zijlstra wrote:
>> - Hardware provided access information could be very useful for driving
>> hot page promotion in tiered memory systems. Need to check if this
>> requires different tuning/heuristics apart from what NUMA balancing
>> already does.
> I think Huang Ying looked at that from the Intel POV and I think the
> conclusion was that it doesn't really work out. What you need is
> frequency information, but the PMU doesn't really give you that. You
> need to process a *ton* of PMU data in-kernel.
Yeah, there were two big problems.
First, IIRC, Intel PEBS at the time only recorded guest virtual addresses
in its records. Those had to be translated back to host addresses before
they were usable, which made each sample extra expensive.
Second, it *did* take a lot of processing to turn raw memory accesses
into actionable frequency data. That meant that we started in a hole
performance-wise and had to make *REALLY* good decisions about page
migration to make up for it.
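To make that second point concrete, here's a rough userspace-style sketch
(emphatically not the kernel's code; the table size, probe depth, decay
step and threshold are all made-up numbers) of the kind of per-sample
aggregation being described: fold each sampled address into a per-page
counter, age the counters so "hot" means recently hot, and pick promotion
candidates that cross a threshold. Doing something like this in-kernel
for every PMU/IBS sample is where the overhead comes from:

	/*
	 * Illustrative sketch only.  Fold a stream of sampled page
	 * accesses into per-page frequency counters with periodic decay.
	 * Every sample costs a hash lookup, and the table has to be aged
	 * and scanned, which adds up at high sample rates.
	 */
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define TABLE_SIZE	4096		/* made-up size, power of two */

	struct hot_entry {
		uint64_t pfn;			/* page frame number, 0 = empty */
		uint32_t count;			/* samples seen this window */
	};

	static struct hot_entry table[TABLE_SIZE];

	/* Record one sampled access to the page containing @vaddr. */
	static void record_access(uint64_t vaddr)
	{
		uint64_t pfn = vaddr >> 12;	/* assume 4 KiB pages */
		size_t slot = (size_t)((pfn * 0x9E3779B97F4A7C15ull) &
				       (TABLE_SIZE - 1));

		/* Linear probing; reuse a matching or empty slot. */
		for (size_t i = 0; i < 8; i++) {
			struct hot_entry *e = &table[(slot + i) & (TABLE_SIZE - 1)];

			if (e->pfn == pfn || e->pfn == 0) {
				e->pfn = pfn;
				e->count++;
				return;
			}
		}
		/* Table pressure: evict the home slot and start over. */
		table[slot].pfn = pfn;
		table[slot].count = 1;
	}

	/* Age the counters so frequency reflects the recent window only. */
	static void decay_counters(void)
	{
		for (size_t i = 0; i < TABLE_SIZE; i++)
			table[i].count >>= 1;
	}

	/* Pages crossing @threshold would be the promotion candidates. */
	static void report_hot_pages(uint32_t threshold)
	{
		for (size_t i = 0; i < TABLE_SIZE; i++)
			if (table[i].pfn && table[i].count >= threshold)
				printf("pfn %#llx: %u samples\n",
				       (unsigned long long)table[i].pfn,
				       (unsigned)table[i].count);
	}

	int main(void)
	{
		/* Synthetic sample stream: one hot page, many cold ones. */
		for (int i = 0; i < 100000; i++)
			record_access((i % 10 == 0) ?
				      0x7f0000001000ull :
				      0x7f0000000000ull + ((uint64_t)rand() << 12));

		report_hot_pages(32);
		decay_counters();
		return 0;
	}

And that's just the bookkeeping; the real thing also has to decide when
to migrate, which is where the "*REALLY* good decisions" part comes in.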
The performance data here don't look awful, but they don't seem to add
up to a clear win either. I'm having a hard time imagining who would
turn this on and how widely it would get used in practice.