Message-ID: <93b2f5eb-362c-49b7-9d90-01d250c9b6ff@kernel.org>
Date: Mon, 10 Nov 2025 09:57:07 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
akpm@...ux-foundation.org, bp@...en8.de, dave.hansen@...ux.intel.com,
hpa@...or.com, mingo@...hat.com, mjguzik@...il.com, luto@...nel.org,
peterz@...radead.org, acme@...nel.org, namhyung@...nel.org,
tglx@...utronix.de, willy@...radead.org, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com
Subject: Re: [PATCH v8 6/7] mm, folio_zero_user: support clearing page ranges
On 10.11.25 08:20, Ankur Arora wrote:
>
> David Hildenbrand (Red Hat) <david@...nel.org> writes:
>
>> On 27.10.25 21:21, Ankur Arora wrote:
>>> Clear contiguous page ranges in folio_zero_user() instead of clearing
>>> a page-at-a-time. This enables CPU specific optimizations based on
>>> the length of the region.
>>>
>>> Operating on arbitrarily large regions can lead to high preemption
>>> latency under cooperative preemption models. So, limit the worst
>>> case preemption latency via architecture specified PAGE_CONTIG_NR
>>> units.
>>>
>>> The resultant performance depends on the kinds of optimizations
>>> available to the CPU for the region being cleared. Two classes of
>>> optimizations:
>>>
>>>  - clearing iteration costs can be amortized over a range larger
>>>    than a single page.
>>>  - cacheline allocation elision (seen on AMD Zen models).
>>>
>>> Testing a demand fault workload shows an improved baseline from the
>>> first optimization and a larger improvement when the region being
>>> cleared is large enough for the second optimization.
>>>
>>> AMD Milan (EPYC 7J13, boost=0, region=64GB on the local NUMA node):
>>>
>>> $ perf bench mem map -p $pg-sz -f demand -s 64GB -l 5
>>>
>>>                  page-at-a-time      contiguous clearing       change
>>>                  (GB/s +- %stdev)    (GB/s +- %stdev)
>>>
>>>   pg-sz=2MB      12.92 +- 2.55%      17.03 +-  0.70%          + 31.8%   preempt=*
>>>
>>>   pg-sz=1GB      17.14 +- 2.27%      18.04 +-  1.05% [#]      +  5.2%   preempt=none|voluntary
>>>   pg-sz=1GB      17.26 +- 1.24%      42.17 +-  4.21%          +144.3%   preempt=full|lazy
>>>
>>> [#] AMD Milan uses a threshold of LLC-size (~32MB) for eliding cacheline
>>> allocation, which is larger than ARCH_PAGE_CONTIG_NR, so
>>> preempt=none|voluntary see no improvement for pg-sz=1GB.
>>>
>>> Also, as mentioned earlier, the baseline improvement is not specific to
>>> AMD Zen platforms. Intel Icelakex (pg-sz=2MB|1GB) sees a similar
>>> improvement as the Milan pg-sz=2MB workload above (~30%).
>>>
>>> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
>>> Reviewed-by: Raghavendra K T <raghavendra.kt@....com>
>>> Tested-by: Raghavendra K T <raghavendra.kt@....com>
>>> ---
>>>  include/linux/mm.h |  6 ++++++
>>>  mm/memory.c        | 42 +++++++++++++++++++++---------------------
>>>  2 files changed, 27 insertions(+), 21 deletions(-)
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index ecbcb76df9de..02db84667f97 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -3872,6 +3872,12 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>>> unsigned int order) {}
>>> #endif /* CONFIG_DEBUG_PAGEALLOC */
>>> +#ifndef ARCH_PAGE_CONTIG_NR
>>> +#define PAGE_CONTIG_NR 1
>>> +#else
>>> +#define PAGE_CONTIG_NR ARCH_PAGE_CONTIG_NR
>>> +#endif
>>
>> The name is a bit misleading. We need something that tells us that this is
>> for batch-processing (clearing? maybe later copying?) contig pages. Likely
>> spelling out that this is for the non-preemptible case only.
>>
>> I assume we can drop the "CONTIG", just like clear_pages() doesn't contain it
>> etc.
>>
>> CLEAR_PAGES_NON_PREEMPT_BATCH
>>
>> PROCESS_PAGES_NON_PREEMPT_BATCH
>
> I think this version is clearer. And would be viable for copying as well.
>
>> Can you remind me again why this is arch specific, and why the default is 1
>> instead of, say 2,4,8 ... ?
>
> So, the only use for this value is to decide a reasonable frequency
> for calling cond_resched() when operating on hugepages.
>
> And the idea was that the arch was best placed to have a reasonably safe
> value based on the expected spread of bandwidths it might see across
> uarchs. And the default choice of 1 was to keep it close to what we
> have now.
>
> Thinking about it now though, maybe it is better to instead do this
> in common code. We could have two sets of defines,
> PROCESS_PAGES_NON_PREEMPT_BATCH_{LARGE,SMALL}, the first for archs
> that define __HAVE_ARCH_CLEAR_PAGES and the second, without.

Right, avoiding this dependency on arch code would be nice.

Also, it feels like something we can later optimize for archs without
__HAVE_ARCH_CLEAR_PAGES in common code.
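
Roughly what I'd imagine in common code, just as a sketch (the names, the
8MB value and the clear_contig_pages() helper below are purely
illustrative, not something that exists today):

/* Pages to clear/copy between cond_resched() calls; values made up. */
#define PROCESS_PAGES_NON_PREEMPT_BATCH_SMALL	1
#define PROCESS_PAGES_NON_PREEMPT_BATCH_LARGE	(SZ_8M >> PAGE_SHIFT)

#ifdef __HAVE_ARCH_CLEAR_PAGES
#define PROCESS_PAGES_NON_PREEMPT_BATCH	PROCESS_PAGES_NON_PREEMPT_BATCH_LARGE
#else
#define PROCESS_PAGES_NON_PREEMPT_BATCH	PROCESS_PAGES_NON_PREEMPT_BATCH_SMALL
#endif

with the only consumer bounding how much gets cleared between
cond_resched() calls:

	while (npages) {
		unsigned int n = min_t(unsigned int, npages,
				       PROCESS_PAGES_NON_PREEMPT_BATCH);

		clear_contig_pages(addr, n);	/* placeholder helper */
		addr += n * PAGE_SIZE;
		npages -= n;
		cond_resched();
	}

That keeps the batch size out of arch code entirely and leaves room to
tune the LARGE value (or the !__HAVE_ARCH_CLEAR_PAGES path) later.
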
--
Cheers
David