Message-ID: <87ttyxb89s.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 07 Mar 2023 10:33:03 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Bharata B Rao <bharata@....com>
Cc: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<mgorman@...e.de>, <peterz@...radead.org>, <mingo@...hat.com>,
<bp@...en8.de>, <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
<akpm@...ux-foundation.org>, <luto@...nel.org>,
<tglx@...utronix.de>, <yue.li@...verge.com>,
<Ravikumar.Bangoria@....com>
Subject: Re: [RFC PATCH 0/5] Memory access profiler(IBS) driven NUMA balancing

Bharata B Rao <bharata@....com> writes:

> On 03-Mar-23 11:23 AM, Huang, Ying wrote:
>>
>> What is the memory access pattern of the workload? Uniform random or
>> something like a Gaussian distribution?
>
> Multiple iterations of uniform access from the beginning to the end of
> the memory region.

I guess these are sequential accesses rather than random accesses with
a uniform distribution.
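
For clarity, a minimal sketch of the two patterns in question (the
helper names and the hard-coded 4 KiB page size are illustrative
assumptions, not taken from the benchmark):

#include <stdlib.h>

/* Sequential sweep: touch each page of the region in order. */
static void sweep_sequential(char *buf, size_t len)
{
	for (size_t i = 0; i < len; i += 4096)
		buf[i]++;
}

/*
 * Uniform random: every touch picks any page of the region with
 * equal probability (len assumed to be a non-zero multiple of the
 * page size).
 */
static void touch_uniform_random(char *buf, size_t len)
{
	size_t pages = len / 4096;

	for (size_t i = 0; i < pages; i++)
		buf[(rand() % pages) * 4096]++;
}

The distinction matters because the NUMA balancing scanner marks PTEs
linearly through the address space, so how soon the workload's
accesses land on already-scanned pages depends on the pattern.
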
>>
>> Anyway, it may take some time for the original method to scan enough
>> memory space to trigger enough hint page faults. We can check
>> numa_pte_updates to see whether enough virtual space has been scanned.
>
> I see that numa_hint_faults is way higher (sometimes close to 5 times
> higher) than numa_pte_updates, which doesn't make sense. Very rarely I
> do see saner numbers, and when that happens the benchmark score is
> also much better.
>
> Looks like an issue with the default kernel itself. I will debug this
> further and get back.

Yes. It appears that something is wrong.
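
For reference, a minimal sketch (not part of the series) that reads
the two counters from /proc/vmstat; since a hint fault can only be
taken on a PTE that the scanner has previously marked,
numa_hint_faults should not exceed numa_pte_updates in a sane run:

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, updates = 0, faults = 0;

	if (!f)
		return 1;
	/* Each line of /proc/vmstat is "<name> <value>". */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "numa_pte_updates"))
			updates = val;
		else if (!strcmp(name, "numa_hint_faults"))
			faults = val;
	}
	fclose(f);
	printf("numa_pte_updates=%llu numa_hint_faults=%llu\n",
	       updates, faults);
	return 0;
}
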
Best Regards,
Huang, Ying