Message-Id: <20260108060406.1693853-1-ankur.a.arora@oracle.com>
Date: Wed, 7 Jan 2026 22:04:06 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Cc: akpm@...ux-foundation.org, david@...nel.org, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
mjguzik@...il.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, willy@...radead.org, raghavendra.kt@....com,
chleroy@...nel.org, ioworker0@...il.com, lizhe.67@...edance.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
ankur.a.arora@...cle.com
Subject: [PATCH] mm: folio_zero_user: (fixup) clear page ranges
Move the unit computation into its declaration and make it const. Also
clean up the comment a little.

Use SZ_32M to define PROCESS_PAGES_NON_PREEMPT_BATCH instead of
hand-coding the computation.
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
Hi Andrew,

Could you fold this into patch-7 "mm: folio_zero_user: clear page ranges"?
Thanks
Ankur
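
For reference (not part of the patch; 4K pages, i.e. PAGE_SHIFT == 12,
assumed here only for the arithmetic), the old and new forms compute the
same batch size:

    32 << (20 - PAGE_SHIFT)  ==  32 << 8          ==  8192 pages
    SZ_32M >> PAGE_SHIFT     ==  0x2000000 >> 12  ==  8192 pages

i.e. 32M worth of pages either way; the SZ_32M form just states that
directly.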
---
include/linux/mm.h | 2 +-
mm/memory.c | 20 ++++++++++----------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c1ff832c33b5..e8bb09816fbf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4238,7 +4238,7 @@ static inline void clear_pages(void *addr, unsigned int npages)
* (See comment above clear_pages() for why preemption latency is a concern
* here.)
*/
-#define PROCESS_PAGES_NON_PREEMPT_BATCH (32 << (20 - PAGE_SHIFT))
+#define PROCESS_PAGES_NON_PREEMPT_BATCH (SZ_32M >> PAGE_SHIFT)
#else /* !clear_pages */
/*
* The architecture does not provide a clear_pages() implementation. Assume
diff --git a/mm/memory.c b/mm/memory.c
index 11ad1db61929..f80c67eba79f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7240,19 +7240,19 @@ static inline int process_huge_page(
static void clear_contig_highpages(struct page *page, unsigned long addr,
unsigned int nr_pages)
{
- unsigned int i, unit, count;
-
- might_sleep();
+ unsigned int i, count;
/*
- * When clearing we want to operate on the largest extent possible since
- * that allows for extent based architecture specific optimizations.
+ * When clearing we want to operate on the largest extent possible to
+ * allow for architecture specific extent based optimizations.
*
- * However, since the clearing interfaces (clear_user_highpages(),
- * clear_user_pages(), clear_pages()), do not call cond_resched(), we
- * limit the batch size when running under non-preemptible scheduling
- * models.
+ * However, since clear_user_highpages() (and the underlying
+ * clear_user_pages() and clear_pages()) do not call cond_resched(), limit
+ * the unit size when running under non-preemptible scheduling models.
*/
- unit = preempt_model_preemptible() ? nr_pages : PROCESS_PAGES_NON_PREEMPT_BATCH;
+ const unsigned int unit = preempt_model_preemptible() ?
+ nr_pages : PROCESS_PAGES_NON_PREEMPT_BATCH;
+
+ might_sleep();
for (i = 0; i < nr_pages; i += count) {
cond_resched();
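
Not part of the patch: below is a minimal userspace sketch of the batching
idea, for anyone reading along. The loop header matches the hunk above; the
body (the count computation and the actual clearing, stubbed out with a
printf) and the 4K page size are assumptions, not the kernel code.

/*
 * Standalone illustration (compile with: cc -o batch batch.c): a fixed
 * batch size caps how many pages are processed between rescheduling
 * points when the scheduling model is not preemptible.
 */
#include <stdio.h>

#define PAGE_SHIFT	12		/* assume 4K pages */
#define SZ_32M		(32u << 20)
#define PROCESS_PAGES_NON_PREEMPT_BATCH	(SZ_32M >> PAGE_SHIFT)

static unsigned int min_uint(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int nr_pages = 1u << (30 - PAGE_SHIFT);	/* a 1G range */
	/* Pretend we run non-preemptible, so cap each batch at 32M. */
	const unsigned int unit = PROCESS_PAGES_NON_PREEMPT_BATCH;
	unsigned int i, count;

	for (i = 0; i < nr_pages; i += count) {
		/* cond_resched() sits here in the kernel loop */
		count = min_uint(unit, nr_pages - i);
		printf("clear pages [%u, %u)\n", i, i + count);
	}
	return 0;
}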
--
2.31.1