Open Source and information security mailing list archives
Date: Thu, 20 Oct 2022 10:42:47 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Kees Cook <keescook@...omium.org>,
	"David S. Miller" <davem@...emloft.net>
Cc: Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	netdev@...r.kernel.org, Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Nick Desaulniers <ndesaulniers@...gle.com>,
	David Rientjes <rientjes@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
	Pavel Begunkov <asml.silence@...il.com>,
	Menglong Dong <imagedong@...cent.com>, linux-kernel@...r.kernel.org,
	linux-hardening@...r.kernel.org
Subject: Re: [PATCH v3][next] skbuff: Proactively round up to kmalloc bucket size

Hello,

On Tue, 2022-10-18 at 02:33 -0700, Kees Cook wrote:
> Instead of discovering the kmalloc bucket size _after_ allocation, round
> up proactively so the allocation is explicitly made for the full size,
> allowing the compiler to correctly reason about the resulting size of
> the buffer through the existing __alloc_size() hint.
>
> This will allow for kernels built with CONFIG_UBSAN_BOUNDS or the
> coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain
> back the __alloc_size() hints that were temporarily reverted in commit
> 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller")
>
> Cc: "David S. Miller" <davem@...emloft.net>
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: Paolo Abeni <pabeni@...hat.com>
> Cc: netdev@...r.kernel.org
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Nick Desaulniers <ndesaulniers@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
> v3: refactor again to pass allocation size more cleanly to callers
> v2: https://lore.kernel.org/lkml/20220923202822.2667581-4-keescook@chromium.org/
> ---
>  net/core/skbuff.c | 41 ++++++++++++++++++++++-------------------
>  1 file changed, 22 insertions(+), 19 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 1d9719e72f9d..3ea1032d03ec 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -425,11 +425,12 @@ EXPORT_SYMBOL(napi_build_skb);
>   * memory is free
>   */
>  static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
> -			     bool *pfmemalloc)
> +			     bool *pfmemalloc, size_t *alloc_size)
>  {
>  	void *obj;
>  	bool ret_pfmemalloc = false;
>
> +	size = kmalloc_size_roundup(size);
>  	/*
>  	 * Try a regular allocation, when that fails and we're not entitled
>  	 * to the reserves, fail.
> @@ -448,6 +449,7 @@ static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
>  	if (pfmemalloc)
>  		*pfmemalloc = ret_pfmemalloc;
>
> +	*alloc_size = size;
>  	return obj;
>  }
>
> @@ -479,7 +481,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  {
>  	struct kmem_cache *cache;
>  	struct sk_buff *skb;
> -	unsigned int osize;
> +	size_t alloc_size;
>  	bool pfmemalloc;
>  	u8 *data;
>
> @@ -506,15 +508,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  	 */
>  	size = SKB_DATA_ALIGN(size);
>  	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> -	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> -	if (unlikely(!data))
> -		goto nodata;

I'm sorry for not noticing the above in the previous iteration, but I
think this revision will produce worse code than the V1, as
kmalloc_reserve() now pollutes an additional register.

Why did you prefer adding an additional parameter to kmalloc_reserve()?
I think computing the alloc_size in the caller is even more readable.

Additionally, as a matter of personal preference, I would not introduce
an additional variable for alloc_size, just:

	// ...
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);

The rationale is a smaller diff, and a consistent style with the
existing code, where 'size' is already adjusted incrementally multiple
times.

Cheers,

Paolo