Date:	Thu, 18 Feb 2016 18:16:01 +0100
From:	Alexander Potapenko <glider@...gle.com>
To:	adech.fo@...il.com, cl@...ux.com, dvyukov@...gle.com,
	akpm@...ux-foundation.org, ryabinin.a.a@...il.com,
	rostedt@...dmis.org, iamjoonsoo.kim@....com, js1304@...il.com,
	kcc@...gle.com
Cc:	kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: [PATCH v2 1/7] kasan: Modify kmalloc_large_oob_right(), add kmalloc_pagealloc_oob_right()

This change renames kmalloc_large_oob_right() to
kmalloc_pagealloc_oob_right(), as that test only checks the page allocator
fallback path.
It also reimplements kmalloc_large_oob_right() so that the test allocates a
chunk of memory that is large enough, yet still small enough not to trigger
the page allocator fallback.
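
For reference, the intent of the two size constants can be illustrated with a
small userspace sketch (not kernel code; it assumes a 4 KB PAGE_SIZE, for
which SLUB's KMALLOC_MAX_CACHE_SIZE works out to 8192 bytes):

    /*
     * Illustrative userspace model of the size choices in this patch.
     * The constants below are assumptions mirroring SLUB with 4 KB pages;
     * this is not the kernel's actual allocation path.
     */
    #include <stdio.h>

    #define PAGE_SHIFT              12                      /* assumed: 4 KB pages */
    #define KMALLOC_SHIFT_HIGH      (PAGE_SHIFT + 1)        /* SLUB's largest slab size */
    #define KMALLOC_MAX_CACHE_SIZE  (1UL << KMALLOC_SHIFT_HIGH)

    static const char *kmalloc_path(unsigned long size)
    {
        /* SLUB serves sizes up to KMALLOC_MAX_CACHE_SIZE from kmalloc
         * slab caches and hands anything larger to the page allocator. */
        return size > KMALLOC_MAX_CACHE_SIZE ? "page allocator fallback"
                                             : "kmalloc slab cache";
    }

    int main(void)
    {
        /* kmalloc_pagealloc_oob_right(): 8192 + 10 -> page allocator */
        printf("%lu bytes: %s\n", KMALLOC_MAX_CACHE_SIZE + 10,
               kmalloc_path(KMALLOC_MAX_CACHE_SIZE + 10));
        /* reworked kmalloc_large_oob_right(): 8192 - 256 -> slab cache */
        printf("%lu bytes: %s\n", KMALLOC_MAX_CACHE_SIZE - 256,
               kmalloc_path(KMALLOC_MAX_CACHE_SIZE - 256));
        return 0;
    }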

Signed-off-by: Alexander Potapenko <glider@...gle.com>
---
v2: - Merged "kasan: Change the behavior of kmalloc_large_oob_right" and
  "kasan: Changed kmalloc_large_oob_right, added kmalloc_pagealloc_oob_right"
  from v1
---
 lib/test_kasan.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index c32f3b0..90ad74f 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -65,11 +65,34 @@ static noinline void __init kmalloc_node_oob_right(void)
 	kfree(ptr);
 }
 
-static noinline void __init kmalloc_large_oob_right(void)
+#ifdef CONFIG_SLUB
+static noinline void __init kmalloc_pagealloc_oob_right(void)
 {
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
+	/* Allocate a chunk that does not fit into a SLUB cache to trigger
+	 * the page allocator fallback.
+	 */
+	pr_info("kmalloc pagealloc allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+	if (!ptr) {
+		pr_err("Allocation failed\n");
+		return;
+	}
+
+	ptr[size] = 0;
+	kfree(ptr);
+}
+#endif
+
+static noinline void __init kmalloc_large_oob_right(void)
+{
+	char *ptr;
+	size_t size = KMALLOC_MAX_CACHE_SIZE - 256;
+	/* Allocate a chunk that is large enough, but still fits into a slab
+	 * and does not trigger the page allocator fallback in SLUB.
+	 */
 	pr_info("kmalloc large allocation: out-of-bounds to right\n");
 	ptr = kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
@@ -324,6 +347,9 @@ static int __init kmalloc_tests_init(void)
 	kmalloc_oob_right();
 	kmalloc_oob_left();
 	kmalloc_node_oob_right();
+#ifdef CONFIG_SLUB
+	kmalloc_pagealloc_oob_right();
+#endif
 	kmalloc_large_oob_right();
 	kmalloc_oob_krealloc_more();
 	kmalloc_oob_krealloc_less();
-- 
2.7.0.rc3.207.g0ac5344
