Message-ID: <pqfokz55m6izzahl5jtbbhundrsjmbeaf3kmspo2q2oqv2hpcl@wdsabytutjv2>
Date: Sun, 11 Feb 2024 10:11:33 +0100
From: Paul Heidekrüger <paul.heidekrueger@....de>
To: Andrey Konovalov <andreyknvl@...il.com>
Cc: elver@...gle.com, akpm@...ux-foundation.org, dvyukov@...gle.com, 
	glider@...gle.com, kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, ryabinin.a.a@...il.com, vincenzo.frascino@....com
Subject: Re: [PATCH] kasan: add atomic tests

On 05.02.2024 22:00, Andrey Konovalov wrote:
> On Fri, Feb 2, 2024 at 12:33 PM Paul Heidekrüger
> <paul.heidekrueger@....de> wrote:
> >
> > Test that KASan can detect some unsafe atomic accesses.
> >
> > As discussed in the linked thread below, these tests attempt to cover
> > the most common uses of atomics and, therefore, aren't exhaustive.
> >
> > CC: Marco Elver <elver@...gle.com>
> > CC: Andrey Konovalov <andreyknvl@...il.com>
> > Link: https://lore.kernel.org/all/20240131210041.686657-1-paul.heidekrueger@tum.de/T/#u
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> > Signed-off-by: Paul Heidekrüger <paul.heidekrueger@....de>
> > ---
> > Changes PATCH RFC v2 -> PATCH v1:
> > * Remove casts to void*
> > * Remove i_safe variable
> > * Add atomic_long_* test cases
> > * Carry over comment from kasan_bitops_tags()
> >
> > Changes PATCH RFC v1 -> PATCH RFC v2:
> > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > * Remove comments and move tests closer to the bitops tests
> > * For functions taking two addresses as input, test each address in a separate function call.
> > * Rename variables for clarity
> > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> >
> >  mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 79 insertions(+)
> >
> > diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> > index 8281eb42464b..4ef2280c322c 100644
> > --- a/mm/kasan/kasan_test.c
> > +++ b/mm/kasan/kasan_test.c
> > @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> >         kfree(bits);
> >  }
> >
> > +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > +{
> > +       int *i_unsafe = (int *)unsafe;
> > +
> > +       KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> > +
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> > +
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> > +       KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> > +}
> > +
> > +static void kasan_atomics(struct kunit *test)
> > +{
> > +       void *a1, *a2;
> > +
> > +       /*
> > +        * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> > +        * that the following 16 bytes will make up the redzone.
> > +        */
> > +       a1 = kzalloc(48, GFP_KERNEL);
> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> > +       a2 = kzalloc(sizeof(int), GFP_KERNEL);
> 
> I think this should be sizeof(atomic_long_t) or sizeof(long),
> otherwise a2 will not work as the safe argument for
> atomic_long_try_cmpxchg on 64-bit architectures.

Ah, good catch!
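
Just to make sure I've understood the suggestion correctly, the second
allocation in v2 would then become something along these lines (untested
sketch, to be confirmed in the actual patch):

	/*
	 * Size the safe allocation for atomic_long_t so that it is large
	 * enough to serve as the safe operand of atomic_long_try_cmpxchg()
	 * on 64-bit architectures as well.
	 */
	a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);

I'd slightly prefer sizeof(atomic_long_t) over sizeof(long), since it names
the widest type the helper actually accesses.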

> > +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> > +
> > +       /* Use atomics to access the redzone. */
> > +       kasan_atomics_helper(test, a1 + 48, a2);
> > +
> > +       kfree(a1);
> > +       kfree(a2);
> > +}
> > +
> >  static void kmalloc_double_kzfree(struct kunit *test)
> >  {
> >         char *ptr;
> > @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> >         KUNIT_CASE(kasan_strings),
> >         KUNIT_CASE(kasan_bitops_generic),
> >         KUNIT_CASE(kasan_bitops_tags),
> > +       KUNIT_CASE(kasan_atomics),
> >         KUNIT_CASE(kmalloc_double_kzfree),
> >         KUNIT_CASE(rcu_uaf),
> >         KUNIT_CASE(workqueue_uaf),
> > --
> > 2.40.1
> >

I'll be sending a v2 patch right away.

Thank you Marco, Mark, and Andrey!

Paul

