Message-ID: <CANpmjNM_sKe0D64y+hsX0gYa8d9aCRVMBZjCvgjKcHPeYsjjBQ@mail.gmail.com>
Date:   Wed, 30 Nov 2022 10:06:45 +0100
From:   Marco Elver <elver@...gle.com>
To:     Feng Tang <feng.tang@...el.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oliver Glitta <glittao@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] mm/slub, kunit: add SLAB_SKIP_KFENCE flag for
 cache creation

On Wed, 30 Nov 2022 at 09:57, Feng Tang <feng.tang@...el.com> wrote:
>
> When kfence is enabled, a buffer allocated in the test case may come
> from the kfence pool, and the intentionally invalid access may then be
> caught and reported by kfence first, causing the test case to fail.
>
> With the default kfence settings this is very hard to trigger. After
> changing CONFIG_KFENCE_NUM_OBJECTS from 255 to 16383 and
> CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, kfence allocations hit
> different slub_kunit cases 7 times in 900 boot tests (a .config
> fragment with these values is appended after the quoted patch).
>
> To avoid this, we initially tried using is_kfence_address() and
> repeating the allocation until a non-kfence address was returned (a
> rough sketch of that approach is appended after the quoted patch).
> Vlastimil Babka suggested that the SLAB_SKIP_KFENCE flag be used
> instead, together with a wrapper function that simplifies cache
> creation.
>
> Signed-off-by: Feng Tang <feng.tang@...el.com>

Reviewed-by: Marco Elver <elver@...gle.com>

> ---
> Changelog:
>
>   since v2:
>     * Don't make SKIP_KFENCE an allowed flag for cache creation, and
>       fix a cache-creation failure bug (Marco Elver)
>     * Add a wrapper cache-creation function to simplify the code,
>       including the SKIP_KFENCE handling (Vlastimil Babka)
>
>  lib/slub_kunit.c | 35 +++++++++++++++++++++++++----------
>  1 file changed, 25 insertions(+), 10 deletions(-)
>
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 7a0564d7cb7a..5b0c8e7eb6dc 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -9,10 +9,25 @@
>  static struct kunit_resource resource;
>  static int slab_errors;
>
> +/*
> + * Wrapper function for kmem_cache_create(), which reduces 2 parameters:
> + * 'align' and 'ctor', and sets SLAB_SKIP_KFENCE flag to avoid getting an
> + * object from kfence pool, where the operation could be caught by both
> + * our test and kfence sanity check.
> + */
> +static struct kmem_cache *test_kmem_cache_create(const char *name,
> +                               unsigned int size, slab_flags_t flags)
> +{
> +       struct kmem_cache *s = kmem_cache_create(name, size, 0,
> +                                       (flags | SLAB_NO_USER_FLAGS), NULL);
> +       s->flags |= SLAB_SKIP_KFENCE;
> +       return s;
> +}
> +
>  static void test_clobber_zone(struct kunit *test)
>  {
> -       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
> -                               SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
> +       struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_alloc", 64,
> +                                                       SLAB_RED_ZONE);
>         u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>         kasan_disable_current();
> @@ -29,8 +44,8 @@ static void test_clobber_zone(struct kunit *test)
>  #ifndef CONFIG_KASAN
>  static void test_next_pointer(struct kunit *test)
>  {
> -       struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
> -                               SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +       struct kmem_cache *s = test_kmem_cache_create("TestSlub_next_ptr_free",
> +                                                       64, SLAB_POISON);
>         u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>         unsigned long tmp;
>         unsigned long *ptr_addr;
> @@ -74,8 +89,8 @@ static void test_next_pointer(struct kunit *test)
>
>  static void test_first_word(struct kunit *test)
>  {
> -       struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
> -                               SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +       struct kmem_cache *s = test_kmem_cache_create("TestSlub_1th_word_free",
> +                                                       64, SLAB_POISON);
>         u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>         kmem_cache_free(s, p);
> @@ -89,8 +104,8 @@ static void test_first_word(struct kunit *test)
>
>  static void test_clobber_50th_byte(struct kunit *test)
>  {
> -       struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
> -                               SLAB_POISON|SLAB_NO_USER_FLAGS, NULL);
> +       struct kmem_cache *s = test_kmem_cache_create("TestSlub_50th_word_free",
> +                                                       64, SLAB_POISON);
>         u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>         kmem_cache_free(s, p);
> @@ -105,8 +120,8 @@ static void test_clobber_50th_byte(struct kunit *test)
>
>  static void test_clobber_redzone_free(struct kunit *test)
>  {
> -       struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
> -                               SLAB_RED_ZONE|SLAB_NO_USER_FLAGS, NULL);
> +       struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_free", 64,
> +                                                       SLAB_RED_ZONE);
>         u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
>
>         kasan_disable_current();
> --
> 2.34.1
>
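
The reproduction setup described in the changelog can be expressed as a
small .config fragment. This is only an illustration built from the values
quoted above; it assumes KFENCE is built in (CONFIG_KFENCE=y), and the two
options make kfence allocations far more frequent than the defaults:

    # Assumed reproduction config; values taken from the changelog above.
    CONFIG_KFENCE=y
    # Default is 255; a larger pool gives kfence more objects to hand out.
    CONFIG_KFENCE_NUM_OBJECTS=16383
    # Default is 100 (ms); a shorter interval samples allocations far more often.
    CONFIG_KFENCE_SAMPLE_INTERVAL=5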

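For comparison, the approach that was tried first and then rejected,
retrying the allocation until is_kfence_address() reports a non-kfence
object, might look roughly like the sketch below. It is an assumption about
that earlier attempt, not code from this patch, and the helper name
test_alloc_non_kfence() is made up for illustration:

    #include <linux/kfence.h>	/* is_kfence_address() */
    #include <linux/slab.h>	/* kmem_cache_alloc(), kmem_cache_free() */

    /*
     * Hypothetical sketch: keep allocating until the object is not from the
     * kfence pool. kfence only samples allocations, so a retry almost always
     * returns a regular slab object; any kfence objects picked up along the
     * way are freed again.
     */
    static void *test_alloc_non_kfence(struct kmem_cache *s, gfp_t gfp)
    {
    	void *p = kmem_cache_alloc(s, gfp);

    	while (p && is_kfence_address(p)) {
    		kmem_cache_free(s, p);
    		p = kmem_cache_alloc(s, gfp);
    	}
    	return p;
    }

Every test would have to call such a helper at each allocation site, which
is why setting SLAB_SKIP_KFENCE once at cache creation, as done by the
test_kmem_cache_create() wrapper in the patch above, is the simpler
solution.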