Open Source and information security mailing list archives
Date: Wed, 20 Oct 2021 12:38:07 -0700
From: Kees Cook <keescook@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Kees Cook <keescook@...omium.org>, Arnd Bergmann <arnd@...db.de>,
	Andrey Ryabinin <ryabinin.a.a@...il.com>,
	Alexander Potapenko <glider@...gle.com>,
	Andrey Konovalov <andreyknvl@...il.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
	linux-hardening@...r.kernel.org
Subject: [PATCH] kasan: test: Consolidate workarounds for unwanted __alloc_size() protection

This fixes kasan-test-use-underlying-string-helpers.patch to avoid
needing new helpers. As done in kasan-test-bypass-__alloc_size-checks.patch,
just use OPTIMIZER_HIDE_VAR(). It additionally converts a use of
"volatile", which was trying to work around similar detection.

Cc: Arnd Bergmann <arnd@...db.de>
Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc: Alexander Potapenko <glider@...gle.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Cc: kasan-dev@...glegroups.com
Signed-off-by: Kees Cook <keescook@...omium.org>
---
Hi Andrew,

Can you please collapse this into your series? It's cleaner to use the
same method everywhere in this file to avoid the compiler being smart. :)

Thanks!
-Kees

---
 lib/test_kasan.c | 24 ++++++------------------
 1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 318fc612e7e7..96a1f085b460 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -525,12 +525,13 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
 {
 	char *ptr;
 	size_t size = 64;
-	volatile size_t invalid_size = size;
+	size_t invalid_size = size;
 
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
 	memset((char *)ptr, 0, 64);
+	OPTIMIZER_HIDE_VAR(invalid_size);
 	KUNIT_EXPECT_KASAN_FAIL(test,
 		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
 	kfree(ptr);
@@ -852,21 +853,6 @@ static void kmem_cache_invalid_free(struct kunit *test)
 	kmem_cache_destroy(cache);
 }
 
-/*
- * noinline wrappers to prevent the compiler from noticing the overflow
- * at compile time rather than having kasan catch it.
- */
-static noinline void *__kasan_memchr(const void *s, int c, size_t n)
-{
-	return memchr(s, c, n);
-}
-
-static noinline int __kasan_memcmp(const void *s1, const void *s2, size_t n)
-{
-	return memcmp(s1, s2, n);
-}
-
-
 static void kasan_memchr(struct kunit *test)
 {
 	char *ptr;
@@ -884,8 +870,9 @@ static void kasan_memchr(struct kunit *test)
 	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
+	OPTIMIZER_HIDE_VAR(size);
 	KUNIT_EXPECT_KASAN_FAIL(test,
-		kasan_ptr_result = __kasan_memchr(ptr, '1', size + 1));
+		kasan_ptr_result = memchr(ptr, '1', size + 1));
 
 	kfree(ptr);
 }
@@ -909,8 +896,9 @@ static void kasan_memcmp(struct kunit *test)
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 	memset(arr, 0, sizeof(arr));
 
+	OPTIMIZER_HIDE_VAR(size);
 	KUNIT_EXPECT_KASAN_FAIL(test,
-		kasan_int_result = __kasan_memcmp(ptr, arr, size+1));
+		kasan_int_result = memcmp(ptr, arr, size+1));
 
 	kfree(ptr);
 }
-- 
2.30.2