Open Source and information security mailing list archives
Date: Fri, 11 Nov 2022 09:12:19 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, Roman Gushchin <roman.gushchin@...ux.dev>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
	Andrey Konovalov <andreyknvl@...il.com>, Kees Cook <keescook@...omium.org>,
	Dave Hansen <dave.hansen@...el.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v7 3/3] mm/slub: extend redzone check to extra allocated
 kmalloc space than requested

On 11/11/22 07:46, Feng Tang wrote:
> On Thu, Nov 10, 2022 at 04:48:35PM +0100, Vlastimil Babka wrote:
>> On 10/21/22 05:24, Feng Tang wrote:
>> > kmalloc will round up the request size to a fixed size (mostly a power
>> > of 2), so there can be extra space beyond what was requested, whose
>> > size is the actual buffer size minus the original request size.
>> >
>> > To better detect out-of-bounds access or abuse of this space, add a
>> > redzone sanity check for it.
>> >
>> > In the current kernel, some kmalloc users already know of the
>> > existence of this space and utilize it after calling 'ksize()' to
>> > learn the real size of the allocated buffer. So we skip the sanity
>> > check for objects on which ksize() has been called, treating them
>> > as legitimate users.
>>
>> Hm so once Kees's effort is finished and all ksize() users behave correctly,
>> we can drop all that skip_orig_size_check() code, right?
>
> Yes, will update the commit log.
>
>> > In some cases, the free pointer could be saved inside the latter
>> > part of the object data area, which may overlap the redzone part
>> > (for small sizes of kmalloc objects). As suggested by Hyeonggon Yoo,
>> > force the free pointer to be in the metadata area when kmalloc
>> > redzone debug is enabled, to make all kmalloc objects covered by
>> > the redzone check.
>> >
>> > Suggested-by: Vlastimil Babka <vbabka@...e.cz>
>> > Signed-off-by: Feng Tang <feng.tang@...el.com>
>> > Acked-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
>>
>> Looks fine, but a suggestion below:
>>
> [...]
>> > @@ -966,13 +982,27 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
>> >  static void init_object(struct kmem_cache *s, void *object, u8 val)
>> >  {
>> >  	u8 *p = kasan_reset_tag(object);
>> > +	unsigned int orig_size = s->object_size;
>> >
>> > -	if (s->flags & SLAB_RED_ZONE)
>> > +	if (s->flags & SLAB_RED_ZONE) {
>> >  		memset(p - s->red_left_pad, val, s->red_left_pad);
>> >
>> > +		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
>> > +			orig_size = get_orig_size(s, object);
>> > +
>> > +			/*
>> > +			 * Redzone the extra allocated space by kmalloc
>> > +			 * than requested.
>> > +			 */
>> > +			if (orig_size < s->object_size)
>> > +				memset(p + orig_size, val,
>> > +					s->object_size - orig_size);
>>
>> Wondering if we can remove this if - memset and instead below:
>>
>> > +		}
>> > +	}
>> > +
>> >  	if (s->flags & __OBJECT_POISON) {
>> > -		memset(p, POISON_FREE, s->object_size - 1);
>> > -		p[s->object_size - 1] = POISON_END;
>> > +		memset(p, POISON_FREE, orig_size - 1);
>> > +		p[orig_size - 1] = POISON_END;
>> >  	}
>> >
>> >  	if (s->flags & SLAB_RED_ZONE)
>>
>> This continues by:
>>   memset(p + s->object_size, val, s->inuse - s->object_size);
>> Instead we could do this, no?
>>   memset(p + orig_size, val, s->inuse - orig_size);
>
> Yep, the code is much simpler and cleaner! thanks
>
> I also changed the name from 'orig_size' to 'poison_size', as below:
>
> Thanks,
> Feng

Thanks! Now merged all to slab/for-6.2/kmalloc_redzone and for-next.