Message-ID: <87o6uny25j.fsf@oracle.com>
Date: Mon, 16 Jun 2025 11:25:28 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, x86@...nel.org, akpm@...ux-foundation.org,
bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
mingo@...hat.com, mjguzik@...il.com, luto@...nel.org,
peterz@...radead.org, acme@...nel.org, namhyung@...nel.org,
tglx@...utronix.de, willy@...radead.org, jon.grimm@....com,
bharata@....com, raghavendra.kt@....com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH v4 00/13] x86/mm: Add multi-page clearing
Dave Hansen <dave.hansen@...el.com> writes:
> On 6/15/25 22:22, Ankur Arora wrote:
>> This series adds multi-page clearing for hugepages, improving on the
>> current page-at-a-time approach in two ways:
>>
>> - amortizes the per-page setup cost over a larger extent
>> - when using string instructions, exposes the real region size to the
>> processor. A processor could use that as a hint to optimize based
>> on the full extent size. AMD Zen uarchs, as an example, elide
>> allocation of cachelines for regions larger than L3-size.
>
> Have you happened to do any testing outside of 'perf bench'?
Yeah. My original tests were with qemu creating a pinned guest (where it
would go and touch pages after allocation).
I think perf bench is a reasonably good test because a lot of demand
faulting often just boils down to the same kind of loop. And of course
MAP_POPULATE is essentially equivalent to the clearing loop in the kernel.
I'm happy to try other tests if you have some in mind.
And, thanks for the quick comments!
--
ankur