Message-Id: <20250414034607.762653-1-ankur.a.arora@oracle.com>
Date: Sun, 13 Apr 2025 20:46:03 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Cc: torvalds@...ux-foundation.org, akpm@...ux-foundation.org, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
luto@...nel.org, peterz@...radead.org, paulmck@...nel.org,
rostedt@...dmis.org, tglx@...utronix.de, willy@...radead.org,
jon.grimm@....com, bharata@....com, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
ankur.a.arora@...cle.com
Subject: [PATCH v3 0/4] mm/folio_zero_user: add multi-page clearing
This series adds multi-page clearing for hugepages. It is a rework
of [1] which took a detour through PREEMPT_LAZY [2].
Why multi-page clearing?: it improves on the current page-at-a-time
approach by providing the processor with a hint about the real region
size. A processor can use this hint to, for instance, elide cacheline
allocation when clearing a large region.
In particular, this optimization is exploited by REP; STOS on AMD Zen,
where regions larger than the L3 size are cleared with non-temporal
stores. This results in significantly better performance.
We also see a performance improvement in cases where this optimization
is unavailable (pg-sz=2MB on AMD, and pg-sz=2MB|1GB on Intel): REP;
STOS is typically microcoded, and its startup cost can now be
amortized over larger regions; in addition, the size hint allows the
hardware prefetcher to do a better job.
Milan (EPYC 7J13, boost=0, preempt=full|lazy):

                 mm/folio_zero_user     x86/folio_zero_user     change
                  (GB/s +- stddev)       (GB/s +- stddev)

   pg-sz=1GB      16.51 +- 0.54%         42.80 +- 3.48%        + 159.2%
   pg-sz=2MB      11.89 +- 0.78%         16.12 +- 0.12%        +  35.5%
Icelakex (Platinum 8358, no_turbo=1, preempt=full|lazy):

                 mm/folio_zero_user     x86/folio_zero_user     change
                  (GB/s +- stddev)       (GB/s +- stddev)

   pg-sz=1GB       8.01 +- 0.24%         11.26 +- 0.48%        + 40.57%
   pg-sz=2MB       7.95 +- 0.30%         10.90 +- 0.26%        + 37.10%
Interaction with preemption: as discussed in [3], zeroing large
regions with string instructions doesn't work well with cooperative
preemption models, which need regular invocations of cond_resched().
So this optimization is limited to the preemptible models (full,
lazy). This is done by overriding __folio_zero_user() -- which does
the usual page-at-a-time zeroing -- with an architecture-optimized
version, but only when running under a preemptible model.
As such, this ties an architecture-specific optimization rather
closely to preemption. That should be easy enough to change, but it
seemed like the simplest approach.
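The gating described above amounts to the following dispatch shape
(illustrative names only -- the series' actual override mechanism may
differ): the architecture path is taken only when the preemption model
allows long, uninterruptible REP; STOS runs.

```c
#include <stdbool.h>

/* Hypothetical sketch of the dispatch: the architecture-optimized
 * clearing path is used only under preemptible models (full, lazy),
 * because a long REP; STOS run cannot call cond_resched(). */

enum zero_path { ZERO_GENERIC, ZERO_ARCH };

static bool preemptible_model;        /* stand-in for the kernel's preempt-model check */
static bool arch_has_override = true; /* e.g. x86 provides multi-page clearing */

static enum zero_path folio_zero_user_path(void)
{
	if (arch_has_override && preemptible_model)
		return ZERO_ARCH;     /* multi-page clearing */
	return ZERO_GENERIC;          /* page-at-a-time zeroing */
}
```

Under preempt=none or voluntary, the generic page-at-a-time path is
kept, since it already interleaves cond_resched() naturally.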
Comments appreciated!
Also at:
github.com/terminus/linux clear-pages-preempt.v1
[1] https://lore.kernel.org/lkml/20230830184958.2333078-1-ankur.a.arora@oracle.com/
[2] https://lore.kernel.org/lkml/87cyyfxd4k.ffs@tglx/
[3] https://lore.kernel.org/lkml/CAHk-=wj9En-BC4t7J9xFZOws5ShwaR9yor7FxHZr8CTVyEP_+Q@mail.gmail.com/
Ankur Arora (4):
x86/clear_page: extend clear_page*() for multi-page clearing
x86/clear_page: add clear_pages()
huge_page: allow arch override for folio_zero_user()
x86/folio_zero_user: multi-page clearing
arch/x86/include/asm/page_32.h | 6 ++++
arch/x86/include/asm/page_64.h | 27 +++++++++------
arch/x86/lib/clear_page_64.S | 52 +++++++++++++++++++++--------
arch/x86/mm/Makefile | 1 +
arch/x86/mm/memory.c | 60 ++++++++++++++++++++++++++++++++++
include/linux/mm.h | 1 +
mm/memory.c | 38 ++++++++++++++++++---
7 files changed, 156 insertions(+), 29 deletions(-)
create mode 100644 arch/x86/mm/memory.c
--
2.31.1