Message-ID: <20260112145150.3259084-1-glider@google.com>
Date: Mon, 12 Jan 2026 15:51:50 +0100
From: Alexander Potapenko <glider@...gle.com>
To: glider@...gle.com
Cc: akpm@...ux-foundation.org, ryan.roberts@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, elver@...gle.com, dvyukov@...gle.com,
kasan-dev@...glegroups.com
Subject: [PATCH v1] mm: kmsan: add tests for high-order page freeing
Add regression tests to verify that KMSAN correctly poisons the full memory
range when freeing pages.
Specifically, verify that accessing the tail pages of a high-order
non-compound allocation triggers a use-after-free report. This ensures
that the fix "mm: kmsan: Fix poisoning of high-order non-compound pages"
is working as expected.
Also add a test for standard order-0 pages for completeness.
Link: https://lore.kernel.org/all/20260104134348.3544298-1-ryan.roberts@arm.com/
Signed-off-by: Alexander Potapenko <glider@...gle.com>
---
mm/kmsan/kmsan_test.c | 48 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
index 902ec48b1e3e6..25cfba0db2cfb 100644
--- a/mm/kmsan/kmsan_test.c
+++ b/mm/kmsan/kmsan_test.c
@@ -361,7 +361,7 @@ static void test_init_vmalloc(struct kunit *test)
KUNIT_EXPECT_TRUE(test, report_matches(&expect));
}
-/* Test case: ensure that use-after-free reporting works. */
+/* Test case: ensure that use-after-free reporting works for kmalloc. */
static void test_uaf(struct kunit *test)
{
EXPECTATION_USE_AFTER_FREE(expect);
@@ -378,6 +378,50 @@ static void test_uaf(struct kunit *test)
KUNIT_EXPECT_TRUE(test, report_matches(&expect));
}
+/* Test case: ensure that use-after-free reporting works for freed pages. */
+static void test_uaf_pages(struct kunit *test)
+{
+ EXPECTATION_USE_AFTER_FREE(expect);
+ const int order = 0;
+ volatile char value;
+ struct page *page;
+ volatile char *var;
+
+ kunit_info(test, "use-after-free on a freed page (UMR report)\n");
+
+ /* Memory is initialized up until __free_pages() thanks to __GFP_ZERO. */
+ page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ var = page_address(page);
+ __free_pages(page, order);
+
+ /* Copy the invalid value before checking it. */
+ value = var[3];
+ USE(value);
+ KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+/* Test case: ensure that use-after-free reporting covers tail pages of high-order allocations. */
+static void test_uaf_high_order_pages(struct kunit *test)
+{
+ EXPECTATION_USE_AFTER_FREE(expect);
+ const int order = 1;
+ volatile char value;
+ struct page *page;
+ volatile char *var;
+
+ kunit_info(test,
+ "use-after-free on a freed high-order page (UMR report)\n");
+
+ page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ var = page_address(page) + PAGE_SIZE;
+ __free_pages(page, order);
+
+ /* Copy the invalid value before checking it. */
+ value = var[3];
+ USE(value);
+ KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
/*
* Test case: ensure that uninitialized values are propagated through per-CPU
* memory.
@@ -683,6 +727,8 @@ static struct kunit_case kmsan_test_cases[] = {
KUNIT_CASE(test_init_kmsan_vmap_vunmap),
KUNIT_CASE(test_init_vmalloc),
KUNIT_CASE(test_uaf),
+ KUNIT_CASE(test_uaf_pages),
+ KUNIT_CASE(test_uaf_high_order_pages),
KUNIT_CASE(test_percpu_propagate),
KUNIT_CASE(test_printk),
KUNIT_CASE(test_init_memcpy),
--
2.52.0.457.g6b5491de43-goog