Message-ID: <f8d72aa1-3f64-b1a1-b776-f8c181f09ca4@suse.cz>
Date: Mon, 24 Oct 2022 19:56:05 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Kees Cook <keescook@...omium.org>,
"David S. Miller" <davem@...emloft.net>
Cc: Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Pavel Begunkov <asml.silence@...il.com>,
Menglong Dong <imagedong@...cent.com>,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH v4] skbuff: Proactively round up to kmalloc bucket size
On 10/22/22 01:49, Kees Cook wrote:
> Instead of discovering the kmalloc bucket size _after_ allocation, round
> up proactively so the allocation is explicitly made for the full size,
> allowing the compiler to correctly reason about the resulting size of
> the buffer through the existing __alloc_size() hint.
>
> This will allow for kernels built with CONFIG_UBSAN_BOUNDS or the
> coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain
> back the __alloc_size() hints that were temporarily reverted in commit
> 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller").
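
For readers following along, the net effect on the callers is roughly
this pattern (a minimal sketch, not the exact skbuff code; the variable
names and GFP_KERNEL here are just for illustration):

	/* Round up to the slab bucket size up front, so the size visible
	 * through the __alloc_size() hint matches what kmalloc() will
	 * actually hand back.
	 */
	size_t alloc_size = kmalloc_size_roundup(size);
	void *data = kmalloc(alloc_size, GFP_KERNEL);
	/* all alloc_size bytes are now legitimately usable */
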
>
> Cc: "David S. Miller" <davem@...emloft.net>
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: Paolo Abeni <pabeni@...hat.com>
> Cc: netdev@...r.kernel.org
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Nick Desaulniers <ndesaulniers@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Kees Cook <keescook@...omium.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Nit below:
> ---
> v4: use kmalloc_size_roundup() in callers, not kmalloc_reserve()
> v3: https://lore.kernel.org/lkml/20221018093005.give.246-kees@kernel.org
> v2: https://lore.kernel.org/lkml/20220923202822.2667581-4-keescook@chromium.org
> ---
> net/core/skbuff.c | 50 +++++++++++++++++++++++------------------------
> 1 file changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 651a82d30b09..77af430296e2 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -508,14 +508,14 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> */
> size = SKB_DATA_ALIGN(size);
> size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> - data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> + osize = kmalloc_size_roundup(size);
> + data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
> if (unlikely(!data))
> goto nodata;
> /* kmalloc(size) might give us more room than requested.
The comment above should now say kmalloc_size_roundup(size), or maybe it
could be deleted completely now?
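
If kept, something like the following would read fine, I think (just a
sketch of the wording):

	/* kmalloc_size_roundup() might give us more room than requested.
	 * Put skb_shared_info exactly at the end of the allocated zone,
	 * to allow max possible filling before reallocation.
	 */
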
> * Put skb_shared_info exactly at the end of allocated zone,
> * to allow max possible filling before reallocation.
> */
> - osize = ksize(data);
> size = SKB_WITH_OVERHEAD(osize);
> prefetchw(data + size);
>