Message-Id: <20251121202352.494700-3-ankur.a.arora@oracle.com>
Date: Fri, 21 Nov 2025 12:23:47 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Cc: akpm@...ux-foundation.org, david@...nel.org, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
mjguzik@...il.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, willy@...radead.org, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
ankur.a.arora@...cle.com
Subject: [PATCH v9 2/7] mm: introduce clear_pages() and clear_user_pages()
Introduce clear_pages(), to be overridden by architectures that
support more efficient clearing of consecutive pages.
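For example, an architecture with a wider clearing primitive could provide
its own version along these lines (illustrative sketch only; the
arch_clear_region() helper is a made-up name, and the exact header may
differ from asm/page.h):

	/* Clear npages physically consecutive pages starting at addr. */
	static inline void clear_pages(void *addr, unsigned int npages)
	{
		arch_clear_region(addr, (unsigned long)npages << PAGE_SHIFT);
	}
	#define clear_pages clear_pages

Defining the clear_pages macro suppresses the generic per-page fallback in
include/linux/mm.h.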
Also introduce clear_user_pages(); however, we do not expect this
function to be overridden anytime soon.
For now, the clear_user_pages() variant that uses clear_user_page()
has to live in mm/util.c, to work around macro magic on sparc and
m68k.
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>
---
Notes:
- Use macros clear_pages, clear_user_page, instead of __HAVE_ARCH_CLEAR_PAGES,
__HAVE_ARCH_CLEAR_USER_PAGE.
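    For context, a hypothetical caller could use the new helpers roughly as
    follows (illustrative only, not part of this series):

	/* Zero a physically contiguous order-2 kernel allocation (4 pages). */
	struct page *page = alloc_pages(GFP_KERNEL, 2);

	if (page)
		clear_pages(page_address(page), 1u << 2);

	/*
	 * Pages that will be mapped to user space should instead go through
	 * clear_user_pages(), so that architectures which do cache
	 * maintenance in clear_user_page() keep working:
	 *
	 *	clear_user_pages(page_address(page), vaddr, page, 1u << 2);
	 */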
include/linux/mm.h | 41 +++++++++++++++++++++++++++++++++++++++++
mm/util.c | 13 +++++++++++++
2 files changed, 54 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6fa6c188f99a..c397ee2f6dd5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3879,6 +3879,26 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
unsigned int order) {}
#endif /* CONFIG_DEBUG_PAGEALLOC */
+#ifndef clear_pages
+/**
+ * clear_pages() - clear a page range for kernel-internal use.
+ * @addr: start address
+ * @npages: number of pages
+ *
+ * Use clear_user_pages() instead when clearing a page range to be
+ * mapped to user space.
+ *
+ * Does absolutely no exception handling.
+ */
+static inline void clear_pages(void *addr, unsigned int npages)
+{
+ do {
+ clear_page(addr);
+ addr += PAGE_SIZE;
+ } while (--npages);
+}
+#endif
+
#ifndef clear_user_page
/**
* clear_user_page() - clear a page to be mapped to user space
@@ -3901,6 +3921,27 @@ static inline void clear_user_page(void *addr, unsigned long vaddr, struct page
}
#endif
+/**
+ * clear_user_pages() - clear a page range to be mapped to user space
+ * @addr: start address
+ * @vaddr: start address of the user mapping
+ * @page: start page
+ * @npages: number of pages
+ *
+ * Assumes that the region (@addr, +@npages) has been validated
+ * already so this does no exception handling.
+ */
+#ifdef clear_user_pages
+void clear_user_pages(void *addr, unsigned long vaddr,
+ struct page *page, unsigned int npages);
+#else
+static inline void clear_user_pages(void *addr, unsigned long vaddr,
+ struct page *page, unsigned int npages)
+{
+ clear_pages(addr, npages);
+}
+#endif
+
#ifdef __HAVE_ARCH_GATE_AREA
extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
extern int in_gate_area_no_mm(unsigned long addr);
diff --git a/mm/util.c b/mm/util.c
index 8989d5767528..3c6cd44db1bd 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1344,3 +1344,16 @@ bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
}
EXPORT_SYMBOL(page_range_contiguous);
#endif
+
+#ifdef clear_user_page
+void clear_user_pages(void *addr, unsigned long vaddr,
+ struct page *page, unsigned int npages)
+{
+ do {
+ clear_user_page(addr, vaddr, page);
+ addr += PAGE_SIZE;
+ vaddr += PAGE_SIZE;
+ page++;
+ } while (--npages);
+}
+#endif
--
2.31.1