Message-Id: <20221129063358.3012362-2-feng.tang@intel.com>
Date: Tue, 29 Nov 2022 14:33:58 +0800
From: Feng Tang <feng.tang@...el.com>
To: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Oliver Glitta <glittao@...il.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Marco Elver <elver@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Feng Tang <feng.tang@...el.com>
Subject: [PATCH v2 2/2] mm/slub, kunit: Add a test case for kmalloc redzone check

The kmalloc redzone check for slub has been merged, so add a kunit test
case for it. The test is inspired by a real-world bug described in
commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):

"
octeon-hcd will crash the kernel when SLOB is used. This usually happens
after the 18-byte control transfer when a device descriptor is read.
The DMA engine is always transferring full 32-bit words and if the
transfer is shorter, some random garbage appears after the buffer.
The problem is not visible with SLUB since it rounds up the allocations
to word boundary, and the extra bytes will go undetected.
"

To avoid interfering with the normal functioning of the kmalloc caches,
create a dedicated kmem_cache that mimics a kmalloc cache, with all the
flags needed to enable the kmalloc redzone, and use kmalloc_trace() to
actually exercise the orig_size recording and redzone setup.
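
The gist of the new case (a condensed sketch of the hunk below; the
kasan_disable_current()/OPTIMIZER_HIDE_VAR() workarounds are omitted
here):

	/* a 32-byte-object cache with kmalloc-style redzoning enabled */
	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc", 32, 0,
			SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE|DEFAULT_FLAGS,
			NULL);
	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);	/* orig_size = 18 */

	p[18] = 0xab;		/* write into the kmalloc redzone, i.e.  */
	p[19] = 0xab;		/* bytes [18, 32) of the 32-byte object  */
	kmem_cache_free(s, p);
	validate_slab_cache(s);
	KUNIT_EXPECT_EQ(test, 2, slab_errors);	/* corruption must be reported */
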
Suggested-by: Vlastimil Babka <vbabka@...e.cz>
Signed-off-by: Feng Tang <feng.tang@...el.com>
---
Changelog:

since v1:
  * create a new cache mimicking a kmalloc cache, to reduce the
    dependency on the global slub_debug setting (Vlastimil Babka)

 lib/slub_kunit.c | 23 +++++++++++++++++++++++
 mm/slab.h        |  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
index a303adf8f11c..dbdd656624d0 100644
--- a/lib/slub_kunit.c
+++ b/lib/slub_kunit.c
@@ -122,6 +122,28 @@ static void test_clobber_redzone_free(struct kunit *test)
 	kmem_cache_destroy(s);
 }
 
+static void test_kmalloc_redzone_access(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc", 32, 0,
+				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE|DEFAULT_FLAGS,
+				NULL);
+	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
+
+	kasan_disable_current();
+
+	/* Suppress the -Warray-bounds warning */
+	OPTIMIZER_HIDE_VAR(p);
+	p[18] = 0xab;
+	p[19] = 0xab;
+
+	kmem_cache_free(s, p);
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
 static int test_init(struct kunit *test)
 {
 	slab_errors = 0;
@@ -141,6 +163,7 @@ static struct kunit_case test_cases[] = {
 #endif
 
 	KUNIT_CASE(test_clobber_redzone_free),
+	KUNIT_CASE(test_kmalloc_redzone_access),
 	{}
 };
 
diff --git a/mm/slab.h b/mm/slab.h
index c71590f3a22b..b6cd98b16ba7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -327,7 +327,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
-			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+			 SLAB_KMALLOC | SLAB_SKIP_KFENCE | \
+			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS)
 
 #if defined(CONFIG_DEBUG_SLAB)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
--
2.34.1