Date: Wed, 6 Oct 2021 12:38:36 +0100
From: Mark Rutland <mark.rutland@....com>
To: Kees Cook <keescook@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Andrey Ryabinin <ryabinin.a.a@...il.com>,
	Alexander Potapenko <glider@...gle.com>,
	Andrey Konovalov <andreyknvl@...il.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
	linux-hardening@...r.kernel.org
Subject: Re: [PATCH] kasan: test: Bypass __alloc_size checks

Hi Kees,

On Tue, Oct 05, 2021 at 08:55:22PM -0700, Kees Cook wrote:
> Intentional overflows, as performed by the KASAN tests, are detected
> at compile time[1] (instead of only at run-time) with the addition of
> __alloc_size. Fix this by forcing the compiler into not being able to
> trust the size used following the kmalloc()s.

It might be better to use OPTIMIZER_HIDE_VAR(), since that's intended
to make the value opaque to the compiler, and volatile might not always
do that depending on how the compiler tracks the variable.

Thanks,
Mark.

> 
> [1] https://lore.kernel.org/lkml/20211005184717.65c6d8eb39350395e387b71f@linux-foundation.org
> 
> Cc: Andrey Ryabinin <ryabinin.a.a@...il.com>
> Cc: Alexander Potapenko <glider@...gle.com>
> Cc: Andrey Konovalov <andreyknvl@...il.com>
> Cc: Dmitry Vyukov <dvyukov@...gle.com>
> Cc: kasan-dev@...glegroups.com
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
>  lib/test_kasan.c        | 10 +++++-----
>  lib/test_kasan_module.c |  2 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 8835e0784578..0e1f8d5281b4 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -435,7 +435,7 @@ static void kmalloc_uaf_16(struct kunit *test)
>  static void kmalloc_oob_memset_2(struct kunit *test)
>  {
>  	char *ptr;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
> 
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> @@ -447,7 +447,7 @@ static void kmalloc_oob_memset_2(struct kunit *test)
>  static void kmalloc_oob_memset_4(struct kunit *test)
>  {
>  	char *ptr;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
> 
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> @@ -459,7 +459,7 @@ static void kmalloc_oob_memset_4(struct kunit *test)
>  static void kmalloc_oob_memset_8(struct kunit *test)
>  {
>  	char *ptr;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
> 
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> @@ -471,7 +471,7 @@ static void kmalloc_oob_memset_8(struct kunit *test)
>  static void kmalloc_oob_memset_16(struct kunit *test)
>  {
>  	char *ptr;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
> 
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> @@ -483,7 +483,7 @@ static void kmalloc_oob_memset_16(struct kunit *test)
>  static void kmalloc_oob_in_memset(struct kunit *test)
>  {
>  	char *ptr;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
> 
>  	ptr = kmalloc(size, GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> diff --git a/lib/test_kasan_module.c b/lib/test_kasan_module.c
> index 7ebf433edef3..c8cc77b1dcf3 100644
> --- a/lib/test_kasan_module.c
> +++ b/lib/test_kasan_module.c
> @@ -19,7 +19,7 @@ static noinline void __init copy_user_test(void)
>  {
>  	char *kmem;
>  	char __user *usermem;
> -	size_t size = 128 - KASAN_GRANULE_SIZE;
> +	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
>  	int __maybe_unused unused;
> 
>  	kmem = kmalloc(size, GFP_KERNEL);
> -- 
> 2.30.2
> 
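As a concrete illustration of Mark's suggestion, here is a minimal sketch of one
of these tests using OPTIMIZER_HIDE_VAR() (from include/linux/compiler.h) instead
of marking size volatile. Only the allocation pattern visible in the hunks above
is taken from the patch; the KUNIT_EXPECT_KASAN_FAIL() line is an illustrative
out-of-bounds access, not copied from lib/test_kasan.c.

/*
 * Sketch only: hide 'size' from the optimizer so a compile-time
 * __alloc_size check cannot see the constant, while the run-time KASAN
 * check still catches the out-of-bounds memset().
 *
 * OPTIMIZER_HIDE_VAR() comes from <linux/compiler.h>.
 */
static void kmalloc_oob_memset_2(struct kunit *test)
{
	char *ptr;
	size_t size = 128 - KASAN_GRANULE_SIZE;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* Make the value of 'size' opaque to the compiler. */
	OPTIMIZER_HIDE_VAR(size);

	/* Illustrative: a 2-byte memset() crossing the end of the buffer. */
	KUNIT_EXPECT_KASAN_FAIL(test, memset(ptr + size - 1, 0, 2));

	kfree(ptr);
}

Unlike the volatile qualifier, this leaves the variable's type and all later uses
untouched; the compiler only loses track of the value at the point of the empty
asm barrier that OPTIMIZER_HIDE_VAR() expands to.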