Date:   Thu, 10 Nov 2022 11:20:34 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Vlastimil Babka <vbabka@...e.cz>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>,
        "Dmitry Vyukov" <dvyukov@...gle.com>,
        Andrey Konovalov <andreyknvl@...il.com>,
        "Kees Cook" <keescook@...omium.org>,
        Dave Hansen <dave.hansen@...el.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, <kasan-dev@...glegroups.com>
Subject: Re: [PATCH v7 1/3] mm/slub: only zero requested size of buffer for
 kzalloc when debug enabled

On Wed, Nov 09, 2022 at 03:28:19PM +0100, Vlastimil Babka wrote:
> On 10/21/22 05:24, Feng Tang wrote:
> > kzalloc/kmalloc will round up the requested size to a fixed size
> > (mostly a power of 2), so the allocated memory can be larger than
> > requested. Currently the kzalloc family of APIs zeroes all of the
> > allocated memory.
> > 
> > To detect out-of-bounds usage of the extra allocated memory, only
> > zero the requested part, so that a redzone sanity check can be
> > added to the extra space later.
> > 
> > For kzalloc users who will call ksize() later and utilize this
> > extra space, please be aware that the space is no longer zeroed
> > when debug is enabled. (Thanks to Kees Cook's effort to sanitize
> > all ksize() use cases [1], this won't be a big issue.)
> > 
> > [1]. https://lore.kernel.org/all/20220922031013.2150682-1-keescook@chromium.org/#r
> > Signed-off-by: Feng Tang <feng.tang@...el.com>
> > ---
[...]
> >  static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >  					struct obj_cgroup *objcg, gfp_t flags,
> > -					size_t size, void **p, bool init)
> > +					size_t size, void **p, bool init,
> > +					unsigned int orig_size)
> >  {
> > +	unsigned int zero_size = s->object_size;
> >  	size_t i;
> >  
> >  	flags &= gfp_allowed_mask;
> >  
> > +	/*
> > +	 * For kmalloc objects, the allocated memory size (object_size) is
> > +	 * likely larger than the requested size (orig_size). If redzone check is
> > +	 * enabled for the extra space, don't zero it, as it will be redzoned
> > +	 * soon. The redzone operation for this extra space could be seen as a
> > +	 * replacement of current poisoning under certain debug option, and
> > +	 * won't break other sanity checks.
> > +	 */
> > +	if (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
> 
> Shouldn't we check SLAB_RED_ZONE instead? Otherwise debugging could be
> specified so that SLAB_RED_ZONE is set but SLAB_STORE_USER is not?

Thanks for the catch!

I will add a check for SLAB_RED_ZONE. The SLAB_STORE_USER check is for
telling whether the 'orig_size' field exists at all: in an earlier
discussion, we made 'orig_size' depend on STORE_USER,
https://lore.kernel.org/lkml/1b0fa66c-f855-1c00-e024-b2b823b18678@suse.cz/
A sketch of the revised check is below.
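
Something like the below is what I have in mind. It is only a sketch
against the v7 hunk above, not the final patch; whether the two flags
must be set together or either alone suffices still needs settling in
v8. (For scale: a kzalloc(700) request is served from the kmalloc-1024
cache, so it's the extra 324 bytes that would be redzoned instead of
zeroed.)

	/*
	 * 'orig_size' is only tracked when SLAB_STORE_USER is set, and
	 * the extra space is only redzoned when SLAB_RED_ZONE is set,
	 * so take SLAB_RED_ZONE into account before skipping the
	 * zeroing of the extra space.
	 */
	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
	    (s->flags & SLAB_KMALLOC))
		zero_size = orig_size;

Note that kmem_cache_debug_flags() tests whether any of the passed
flags is set, not all of them.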

> > +	    (s->flags & SLAB_KMALLOC))
> > +		zero_size = orig_size;
> > +
> >  	/*
> >  	 * As memory initialization might be integrated into KASAN,
> >  	 * kasan_slab_alloc and initialization memset must be
> > @@ -736,7 +750,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
> >  	for (i = 0; i < size; i++) {
> >  		p[i] = kasan_slab_alloc(s, p[i], flags, init);
> >  		if (p[i] && init && !kasan_has_integrated_init())
> > -			memset(p[i], 0, s->object_size);
> > +			memset(p[i], 0, zero_size);
> >  		kmemleak_alloc_recursive(p[i], s->object_size, 1,
> >  					 s->flags, flags);
> >  		kmsan_slab_alloc(s, p[i], flags);
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 12354fb8d6e4..17292c2d3eee 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3395,7 +3395,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
> >  	init = slab_want_init_on_alloc(gfpflags, s);
> >  
> >  out:
> > -	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
> > +	/*
> > +	 * When init equals 'true', like for kzalloc() family, only
> > +	 * @orig_size bytes will be zeroed instead of s->object_size
> 
> s/will be/might be/ because it depends on the debug options?

Yes, will change.
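
The comment would then read something like this (again only a sketch
of the v8 wording):

	/*
	 * When init equals 'true', like for kzalloc() family, only
	 * @orig_size bytes might be zeroed instead of s->object_size,
	 * depending on the debug options in effect.
	 */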

Thanks,
Feng
