Message-ID: <20251113000932.1589073-14-willy@infradead.org>
Date: Thu, 13 Nov 2025 00:09:27 +0000
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>,
linux-mm@...ck.org,
Kees Cook <kees@...nel.org>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
linux-hardening@...r.kernel.org
Subject: [PATCH v4 13/16] usercopy: Remove folio references from check_heap_object()

Use page_slab() instead of virt_to_folio() followed by folio_slab().
We do end up calling compound_head() twice for non-slab copies, but that
will not be a problem once we allocate memdescs separately.
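
For reference, a rough sketch (illustration only, not part of the diff) of
the lookup the new code performs; it assumes page_slab() returns NULL for
pages that do not belong to a slab, which is what the hunk below relies on:

	page = virt_to_page(ptr);
	slab = page_slab(page);		/* NULL for non-slab memory */
	if (slab) {
		/* Slab object: size and flag checks done by the allocator. */
		__check_heap_object(ptr, n, slab, to_user);
	} else if (PageCompound(page)) {
		/* Page allocator: the copy must stay within the compound page. */
		page = compound_head(page);
		offset = ptr - page_address(page);
		if (n > page_size(page) - offset)
			usercopy_abort("page alloc", NULL, to_user, offset, n);
	}
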
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Kees Cook <kees@...nel.org>
Cc: "Gustavo A. R. Silva" <gustavoars@...nel.org>
Cc: linux-hardening@...r.kernel.org
---
 mm/usercopy.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index dbdcc43964fb..5de7a518b1b1 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -164,7 +164,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 {
 	unsigned long addr = (unsigned long)ptr;
 	unsigned long offset;
-	struct folio *folio;
+	struct page *page;
+	struct slab *slab;
 
 	if (is_kmap_addr(ptr)) {
 		offset = offset_in_page(ptr);
@@ -189,16 +190,23 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	folio = virt_to_folio(ptr);
-
-	if (folio_test_slab(folio)) {
+	page = virt_to_page(ptr);
+	slab = page_slab(page);
+	if (slab) {
 		/* Check slab allocator for flags and size. */
-		__check_heap_object(ptr, n, folio_slab(folio), to_user);
-	} else if (folio_test_large(folio)) {
-		offset = ptr - folio_address(folio);
-		if (n > folio_size(folio) - offset)
+		__check_heap_object(ptr, n, slab, to_user);
+	} else if (PageCompound(page)) {
+		page = compound_head(page);
+		offset = ptr - page_address(page);
+		if (n > page_size(page) - offset)
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	}
+
+	/*
+	 * We cannot check non-compound pages. They might be part of
+	 * a large allocation, in which case crossing a page boundary
+	 * is fine.
+	 */
 }
 
 DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_HARDENED_USERCOPY_DEFAULT_ON,
--
2.47.2