Message-ID: <Y4cnfE4n6b1koeR3@hyeyoo>
Date:   Wed, 30 Nov 2022 18:50:52 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Feng Tang <feng.tang@...el.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>, Marco Elver <elver@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Oliver Glitta <glittao@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/2] mm/slub, kunit: Add a test case for kmalloc
 redzone check

On Wed, Nov 30, 2022 at 04:54:51PM +0800, Feng Tang wrote:
> The kmalloc redzone check for slub has been merged, and it's better to add
> a kunit test case for it. The test is inspired by a real-world case described
> in commit 120ee599b5bf ("staging: octeon-usb: prevent memory corruption"):
> 
> "
>   octeon-hcd will crash the kernel when SLOB is used. This usually happens
>   after the 18-byte control transfer when a device descriptor is read.
>   The DMA engine is always transferring full 32-bit words and if the
>   transfer is shorter, some random garbage appears after the buffer.
>   The problem is not visible with SLUB since it rounds up the allocations
>   to word boundary, and the extra bytes will go undetected.
> "
> 
> To avoid interfering with the normal functioning of the kmalloc caches, a
> kmem_cache mimicking a kmalloc cache is created with similar flags, and
> kmalloc_trace() is used to directly exercise the orig_size and redzone
> setup.
> 
> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Feng Tang <feng.tang@...el.com>
> ---
> Changelog:
> 
>   since v2:
>     * only add SLAB_KMALLOC to SLAB_CACHE_FLAGS and SLAB_FLAGS_PERMITTED,
>       and use the new cache-creation wrapper (Vlastimil Babka)
> 
>   since v1:
>     * create a new cache mimicking the kmalloc cache, to reduce the
>       dependency on the global slub_debug setting (Vlastimil Babka)
> 
>  lib/slub_kunit.c | 22 ++++++++++++++++++++++
>  mm/slab.h        |  4 +++-
>  2 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
> index 5b0c8e7eb6dc..ff24879e3afe 100644
> --- a/lib/slub_kunit.c
> +++ b/lib/slub_kunit.c
> @@ -135,6 +135,27 @@ static void test_clobber_redzone_free(struct kunit *test)
>  	kmem_cache_destroy(s);
>  }
>  
> +static void test_kmalloc_redzone_access(struct kunit *test)
> +{
> +	struct kmem_cache *s = test_kmem_cache_create("TestSlub_RZ_kmalloc", 32,
> +				SLAB_KMALLOC|SLAB_STORE_USER|SLAB_RED_ZONE);
> +	u8 *p = kmalloc_trace(s, GFP_KERNEL, 18);
> +
> +	kasan_disable_current();
> +
> +	/* Suppress the -Warray-bounds warning */
> +	OPTIMIZER_HIDE_VAR(p);
> +	p[18] = 0xab;
> +	p[19] = 0xab;
> +
> +	kmem_cache_free(s, p);
> +	validate_slab_cache(s);
> +	KUNIT_EXPECT_EQ(test, 2, slab_errors);
> +
> +	kasan_enable_current();
> +	kmem_cache_destroy(s);
> +}
> +
>  static int test_init(struct kunit *test)
>  {
>  	slab_errors = 0;
> @@ -154,6 +175,7 @@ static struct kunit_case test_cases[] = {
>  #endif
>  
>  	KUNIT_CASE(test_clobber_redzone_free),
> +	KUNIT_CASE(test_kmalloc_redzone_access),
>  	{}
>  };
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index c71590f3a22b..7cc432969945 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -344,7 +344,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			  SLAB_ACCOUNT)
>  #elif defined(CONFIG_SLUB)
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
> -			  SLAB_TEMPORARY | SLAB_ACCOUNT | SLAB_NO_USER_FLAGS)
> +			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
> +			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC)
>  #else
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
>  #endif
> @@ -364,6 +365,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
>  			      SLAB_RECLAIM_ACCOUNT | \
>  			      SLAB_TEMPORARY | \
>  			      SLAB_ACCOUNT | \
> +			      SLAB_KMALLOC | \
>  			      SLAB_NO_USER_FLAGS)
>  
>  bool __kmem_cache_empty(struct kmem_cache *);
> -- 
> 2.34.1
> 

Looks good to me.
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
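
A side note for readers less familiar with the mechanism being tested: with
the kmalloc redzone check, the bytes between the requested size (orig_size,
18 here) and the actual object size are filled with a poison pattern at
allocation time and verified later (e.g. on free or during validation), so
the stray writes to p[18]/p[19] above are caught. Below is a minimal
userspace sketch of that idea, not kernel code; the helper names, poison
value and per-byte error counting are made up for illustration and do not
mirror SLUB's internals:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REDZONE_BYTE	0xcc	/* arbitrary poison value for this sketch */
#define REDZONE_SIZE	16

/* Allocate orig_size bytes plus a poisoned redzone right after them. */
static void *rz_alloc(size_t orig_size)
{
	unsigned char *p = malloc(orig_size + REDZONE_SIZE);

	if (p)
		memset(p + orig_size, REDZONE_BYTE, REDZONE_SIZE);
	return p;
}

/* Count redzone bytes that no longer hold the poison pattern. */
static int rz_check(void *obj, size_t orig_size)
{
	unsigned char *rz = (unsigned char *)obj + orig_size;
	int errors = 0;
	size_t i;

	for (i = 0; i < REDZONE_SIZE; i++)
		if (rz[i] != REDZONE_BYTE)
			errors++;
	return errors;
}

int main(void)
{
	unsigned char *p = rz_alloc(18);

	if (!p)
		return 1;

	/* Overwrite two bytes past the requested 18, like the kunit case. */
	p[18] = 0xab;
	p[19] = 0xab;

	printf("corrupted redzone bytes: %d\n", rz_check(p, 18)); /* prints 2 */

	free(p);
	return 0;
}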

-- 
Thanks,
Hyeonggon
