Message-ID: <67e6ebce-f8cc-7d28-5e85-8a3909c2d180@suse.cz>
Date:   Tue, 29 Nov 2022 12:01:05 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Marco Elver <elver@...gle.com>, Feng Tang <feng.tang@...el.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Oliver Glitta <glittao@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm/slub, kunit: Add a test case for kmalloc
 redzone check

On 11/29/22 10:31, Marco Elver wrote:
> On Tue, 29 Nov 2022 at 07:37, Feng Tang <feng.tang@...el.com> wrote:
>>
>> kmalloc redzone check for slub has been merged, and it's better to add
>> a kunit case for it, which is inspired by a real-world case as described
>> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
>>
>> "
>>   octeon-hcd will crash the kernel when SLOB is used. This usually happens
>>   after the 18-byte control transfer when a device descriptor is read.
>>   The DMA engine is always transferring full 32-bit words and if the
>>   transfer is shorter, some random garbage appears after the buffer.
>>   The problem is not visible with SLUB since it rounds up the allocations
>>   to word boundary, and the extra bytes will go undetected.
>> "
>>
>> To avoid interfering with the normal functioning of the kmalloc
>> caches, a dedicated kmem_cache mimicking a kmalloc cache is created
>> with all the flags needed to enable the kmalloc redzone, and
>> kmalloc_trace() is used to actually exercise the orig_size and
>> redzone setup.
>>
>> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
>> Signed-off-by: Feng Tang <feng.tang@...el.com>
>> ---
>> Changelog:
>>
>>   since v1:
>>   * create a new cache mimicking a kmalloc cache, reducing the
>>     dependency on the global slub_debug setting (Vlastimil Babka)
>>
>>  lib/slub_kunit.c | 23 +++++++++++++++++++++++
>>  mm/slab.h        |  3 ++-
>>  2 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
>> index a303adf8f11c..dbdd656624d0 100644
>> --- a/lib/slub_kunit.c
>> +++ b/lib/slub_kunit.c
>> @@ -122,6 +122,28 @@ static void test_clobber_redzone_free(struct kunit *test)
>>         kmem_cache_destroy(s);
>>  }
>>
>> +static void test_kmalloc_redzone_access(struct kunit *test)
>> +{
>> +       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_kmalloc", 32, 0,
>> +                               SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE|DEFAULT_FLAGS,
>> +                               NULL);
>> +       u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
>> +
>> +       kasan_disable_current();
>> +
>> +       /* Suppress the -Warray-bounds warning */
>> +       OPTIMIZER_HIDE_VAR(p);
>> +       p[18] = 0xab;
>> +       p[19] = 0xab;
>> +
>> +       kmem_cache_free(s, p);
>> +       validate_slab_cache(s);
>> +       KUNIT_EXPECT_EQ(test, 2, slab_errors);
>> +
>> +       kasan_enable_current();
>> +       kmem_cache_destroy(s);
>> +}
>> +
>>  static int test_init(struct kunit *test)
>>  {
>>         slab_errors = 0;
>> @@ -141,6 +163,7 @@ static struct kunit_case test_cases[] = {
>>  #endif
>>
>>         KUNIT_CASE(test_clobber_redzone_free),
>> +       KUNIT_CASE(test_kmalloc_redzone_access),
>>         {}
>>  };
>>
>> diff --git a/mm/slab.h b/mm/slab.h
>> index c71590f3a22b..b6cd98b16ba7 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -327,7 +327,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>>                          SLAB_CACHE_DMA32 | SLAB_PANIC | \
>> -                        SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
>> +                        SLAB_KMALLOC | SLAB_SKIP_KFENCE | \
>> +                        SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS)
> 
> Shouldn't this hunk be in the previous patch? Otherwise that patch
> alone will fail to build.

Good point.

> This will also make SLAB_SKIP_KFENCE generally available for use at
> cache creation. That is a significant change, since it wasn't possible
> before. Perhaps add a brief note to the commit message (or make it a
> separate patch). We were trying to avoid making this possible, as it
> might be abused - however, given it's required for tests like these, I
> suppose there's no way around it.

For SLAB_SKIP_KFENCE, we could also add the flag after creation to avoid
this trouble? After all, there is a sysfs file to control it at runtime
anyway (via skip_kfence_store()).
In that case patch 1 would have to wrap kmem_cache_create() and the flag
addition in a new function to avoid repetition. That function could also
add SLAB_NO_USER_FLAGS to the kmem_cache_create() call, instead of using
the DEFAULT_FLAGS #define.

For SLAB_KMALLOC there's probably no such way, unless we abuse the
internal APIs even more and call e.g. create_boot_cache() instead of
kmem_cache_create(). But that one is __init, so probably not. If we
instead allow the flag, I wouldn't add it to SLAB_CORE_FLAGS but rather
to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED.

> Thanks,
> -- Marco
