Message-Id: <20260107072009.1615991-9-ankur.a.arora@oracle.com>
Date: Tue, 6 Jan 2026 23:20:09 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Cc: akpm@...ux-foundation.org, david@...nel.org, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
mjguzik@...il.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, willy@...radead.org, raghavendra.kt@....com,
chleroy@...nel.org, ioworker0@...il.com, lizhe.67@...edance.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
ankur.a.arora@...cle.com
Subject: [PATCH v11 8/8] mm: folio_zero_user: cache neighbouring pages
folio_zero_user() does straight zeroing, without caring about cache
temporal locality.

This replaced the approach of commit c6ddfb6c5890 ("mm, clear_huge_page:
move order algorithm into a separate function"), where we cleared one
page at a time, converging on the faulting page from the left and the
right.
To retain limited temporal locality, split the clearing into three
parts: the faulting page and its immediate neighbourhood, the region to
its left, and the region to its right. We clear the local neighbourhood
last to maximize the chances of it sticking around in the cache.
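
As an illustration (not from the patch), a minimal standalone sketch of
the three-way split. The clamp() helper and the example folio size and
fault indices are made up for the example; the kernel code below uses
clamp_t() and DEFINE_RANGE() instead:

#include <stdio.h>

#define RADIUS	2	/* stands in for FOLIO_ZERO_LOCALITY_RADIUS */

static long clamp(long v, long lo, long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

static void split(long nr, long fault)
{
	/* local neighbourhood around the fault, clamped to the folio */
	long lo = clamp(fault - RADIUS, 0, nr - 1);
	long hi = clamp(fault + RADIUS, 0, nr - 1);

	/* printed in clearing order: right, left, neighbourhood last */
	printf("right: [%ld, %ld]\n", hi + 1, nr - 1);
	printf("left:  [%ld, %ld]\n", 0L, lo - 1);
	printf("local: [%ld, %ld]\n", lo, hi);
}

int main(void)
{
	split(512, 100);	/* right [103,511], left [0,97], local [98,102] */
	split(512, 0);		/* right [3,511], left [0,-1] (empty), local [0,2] */
	return 0;
}

For a fault at either edge of the folio, the left or right part
collapses to an empty range (start > end); the patch handles that with
clamp_t() and by skipping ranges whose length is zero.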
Performance
===
AMD Genoa (EPYC 9J14, cpus=2 sockets * 96 cores * 2 threads,
memory=2.2 TB, L1d=16K/thread, L2=512K/thread, L3=2MB/thread)
vm-scalability/anon-w-seq-hugetlb: this workload runs with 384 processes
(one for each CPU) each zeroing anonymously mapped hugetlb memory which
is then accessed sequentially.
                         stime                   utime

 discontiguous-page      1739.93 ( +- 6.15% )    1016.61 ( +- 4.75% )
 contiguous-page         1853.70 ( +- 2.51% )    1187.13 ( +- 3.50% )
 batched-pages           1756.75 ( +- 2.98% )    1133.32 ( +- 4.89% )
 neighbourhood-last      1725.18 ( +- 4.59% )    1123.78 ( +- 7.38% )
Both stime and utime respond more or less as expected. There is a fair
amount of run-to-run variation, but the general trend is that stime
drops and utime increases. There are a few oddities, like
contiguous-page performing very differently from batched-pages.

This is likely an uncommon pattern, where we saturate the memory
bandwidth (since all CPUs are running the test) and at the same time
are cache constrained because we access the entire region.
Kernel make (make -j 12 bzImage):
                         stime                   utime

 discontiguous-page       199.29 ( +- 0.63% )    1431.67 ( +- 0.04% )
 contiguous-page          193.76 ( +- 0.58% )    1433.60 ( +- 0.05% )
 batched-pages            193.92 ( +- 0.76% )    1431.04 ( +- 0.08% )
 neighbourhood-last       194.46 ( +- 0.68% )    1431.51 ( +- 0.06% )
For make, the utime stays relatively flat, with a fairly small (-2.4%)
improvement in the stime.
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
Reviewed-by: Raghavendra K T <raghavendra.kt@....com>
Tested-by: Raghavendra K T <raghavendra.kt@....com>
---
mm/memory.c | 41 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 49e7154121f5..a27ef2eb92db 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7262,6 +7262,15 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
 	}
 }
 
+/*
+ * When zeroing a folio, we want to differentiate between pages in the
+ * vicinity of the faulting address where we have spatial and temporal
+ * locality, and those far away where we don't.
+ *
+ * Use a radius of 2 for determining the local neighbourhood.
+ */
+#define FOLIO_ZERO_LOCALITY_RADIUS 2
+
 /**
  * folio_zero_user - Zero a folio which will be mapped to userspace.
  * @folio: The folio to zero.
@@ -7269,10 +7278,36 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
  */
 void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
-	unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
+	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
+	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	struct range r[3];
+	int i;
 
-	clear_contig_highpages(folio_page(folio, 0),
-			       base_addr, folio_nr_pages(folio));
+	/*
+	 * Faulting page and its immediate neighbourhood. Will be cleared at the
+	 * end to keep its cachelines hot.
+	 */
+	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
+			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+
+	/* Region to the left of the fault */
+	r[1] = DEFINE_RANGE(pg.start,
+			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+
+	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
+	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
+			    pg.end);
+
+	for (i = 0; i < ARRAY_SIZE(r); i++) {
+		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
+		const unsigned int nr_pages = range_len(&r[i]);
+		struct page *page = folio_page(folio, r[i].start);
+
+		if (nr_pages > 0)
+			clear_contig_highpages(page, addr, nr_pages);
+	}
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
--
2.31.1