Message-ID: <55a344fa-4325-e82d-eeaa-1a77611ff513@amd.com>
Date: Mon, 6 Mar 2023 21:00:47 +0530
From: Bharata B Rao <bharata@....com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, mgorman@...e.de,
peterz@...radead.org, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org,
akpm@...ux-foundation.org, luto@...nel.org, tglx@...utronix.de,
yue.li@...verge.com, Ravikumar.Bangoria@....com
Subject: Re: [RFC PATCH 0/5] Memory access profiler(IBS) driven NUMA balancing
On 03-Mar-23 11:23 AM, Huang, Ying wrote:
>
> What is the memory access pattern of the workload? Uniform random or
> something like a Gaussian distribution?
Multiple iterations of uniform access from beginning to end of the
memory region.
>
> Anyway, it may take some time for the original method to scan enough
> memory space to trigger enough hint page faults. We can check
> numa_pte_updates to see whether enough virtual space has been scanned.
I see that numa_hint_faults is way higher (sometimes close to 5 times
higher) than numa_pte_updates. This doesn't make sense. Very rarely do I
see saner numbers, and when that happens the benchmark score is also much
better. This looks like an issue with the default kernel itself. I will
debug it further and get back.
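For reference, both counters are exported through /proc/vmstat, so the
anomaly can be spotted by comparing them directly. Below is a minimal
sketch of that comparison; the snapshot values are made up for
illustration (on a live system you would read /proc/vmstat itself), but
the point stands: each NUMA hint fault should correspond to a PTE that
was previously marked by the scanner, so a faults/updates ratio well
above 1 signals the inconsistency described above.

```shell
#!/bin/sh
# On a live system, take the counters straight from /proc/vmstat:
#   grep -E '^numa_(pte_updates|hint_faults) ' /proc/vmstat
# Illustrative snapshot (hypothetical values, not real measurements):
vmstat_snapshot="numa_pte_updates 100000
numa_hint_faults 480000"

pte=$(echo "$vmstat_snapshot" | awk '/^numa_pte_updates/ {print $2}')
faults=$(echo "$vmstat_snapshot" | awk '/^numa_hint_faults/ {print $2}')

# A ratio well above 1 (here ~4.8) would reproduce the reported anomaly:
# more hint faults than PTEs the scanner ever marked.
awk -v f="$faults" -v p="$pte" 'BEGIN {printf "ratio: %.1f\n", f / p}'
```

Sampling the two counters before and after a benchmark run, rather than
reading absolute values, avoids counting activity from earlier workloads.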
Regards,
Bharata.