Message-Id: <0c51a7266ea851797dc9816405fc40d860a48db1.1609871239.git.andreyknvl@google.com>
Date: Tue, 5 Jan 2021 19:27:53 +0100
From: Andrey Konovalov <andreyknvl@...gle.com>
To: Catalin Marinas <catalin.marinas@....com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Will Deacon <will.deacon@....com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Kevin Brodsky <kevin.brodsky@....com>,
kasan-dev@...glegroups.com, linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH 09/11] kasan: fix memory corruption in kasan_bitops_tags test

Since the hardware tag-based KASAN mode might not have a redzone that
comes after an allocated object (when kasan.mode=prod is enabled), the
kasan_bitops_tags() test ends up corrupting the next object in memory.
Change the test so it always accesses the redzone that lies within the
allocated object's boundaries.

Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
Link: https://linux-review.googlesource.com/id/I67f51d1ee48f0a8d0fe2658c2a39e4879fe0832a
---
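A minimal sketch of the object layout this fix relies on, assuming the
standard kmalloc size classes; the helper name below is made up for
illustration, and the real test wraps these accesses in
KUNIT_EXPECT_KASAN_FAIL() via kasan_bitops_modify():

#include <linux/slab.h>
#include <linux/bitops.h>

/* Hypothetical helper, for illustration only. */
static void bitops_redzone_sketch(void)
{
	/* A 48-byte request is served from the kmalloc-64 cache. */
	long *bits = kzalloc(48, GFP_KERNEL);

	if (!bits)
		return;

	/*
	 * Offset 48 is past the requested size, so KASAN still reports
	 * the access, but it lands in the last 16 bytes of the same
	 * 64-byte object (the in-object redzone) rather than in the
	 * neighbouring object.
	 */
	set_bit(0, (void *)bits + 48);

	kfree(bits);
}

With kasan.mode=prod there may be no out-of-object redzone at all, which
is why the previous version of the test, accessing &bits[1] past a
16-byte allocation, could corrupt whatever object was allocated next.
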
lib/test_kasan.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index b67da7f6e17f..3ea52da52714 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -771,17 +771,17 @@ static void kasan_bitops_tags(struct kunit *test)
 
 	/* This test is specifically crafted for the tag-based mode. */
 	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
-		kunit_info(test, "skipping, CONFIG_KASAN_SW_TAGS required");
+		kunit_info(test, "skipping, CONFIG_KASAN_SW/HW_TAGS required");
 		return;
 	}
 
-	/* Allocation size will be rounded to up granule size, which is 16. */
-	bits = kzalloc(sizeof(*bits), GFP_KERNEL);
+	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
+	bits = kzalloc(48, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);
 
-	/* Do the accesses past the 16 allocated bytes. */
-	kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]);
-	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]);
+	/* Do the accesses past the 48 allocated bytes, but within the redzone. */
+	kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48);
+	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48);
 
 	kfree(bits);
 }
--
2.29.2.729.g45daf8777d-goog