Date: Fri, 03 Feb 2023 08:59:31 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, eric.dumazet@...il.com, Alexander Duyck <alexanderduyck@...com>, Soheil Hassas Yeganeh <soheil@...gle.com>
Subject: Re: [PATCH net-next 4/4] net: add dedicated kmem_cache for typical/small skb->head

On Thu, 2023-02-02 at 18:58 +0000, Eric Dumazet wrote:
> Note: after Kees Cook patches and this one, we might
> be able to revert commit
> dbae2b062824 ("net: skb: introduce and use a single page frag cache")
> because GRO_MAX_HEAD is also small.

I guess I'll need some time to do the relevant benchmarks, but I'm not
able to schedule them very soon.

> @@ -486,6 +499,21 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
>  	void *obj;
>  
>  	obj_size = SKB_HEAD_ALIGN(*size);
> +	if (obj_size <= SKB_SMALL_HEAD_CACHE_SIZE &&
> +	    !(flags & KMALLOC_NOT_NORMAL_BITS)) {
> +
> +		/* skb_small_head_cache has non power of two size,
> +		 * likely forcing SLUB to use order-3 pages.
> +		 * We deliberately attempt a NOMEMALLOC allocation only.
> +		 */
> +		obj = kmem_cache_alloc_node(skb_small_head_cache,
> +					    flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
> +					    node);
> +		if (obj) {
> +			*size = SKB_SMALL_HEAD_CACHE_SIZE;
> +			goto out;
> +		}

In case of kmem_cache allocation failure, should we skip the second
__GFP_NOMEMALLOC attempt below?

I *think* the non-power-of-two size is also required to avoid an issue
with plain (no GFP_DMA nor __GFP_ACCOUNT) allocations that fall back to
kmalloc(): it prevents skb_kfree_head() from mis-interpreting skb->head
as kmem_cache allocated.

Thanks!

Paolo
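For context on the free-path concern above, here is a minimal sketch of the
size-based check skb_kfree_head() would rely on. The helper body and the
head_size parameter are assumptions for illustration, not the patch's exact
code; only kmem_cache_free() and kfree() are existing kernel APIs, and the
real code may derive the size differently (e.g. from skb->end):

/* Sketch (assumed shape): a head carved from skb_small_head_cache must be
 * returned with kmem_cache_free(), while a kmalloc()ed head must go through
 * kfree().  Per Paolo's point, because SKB_SMALL_HEAD_CACHE_SIZE is
 * deliberately not a power of two, a kmalloc() fallback should never report
 * exactly that size, so the size alone can discriminate the two cases.
 */
static void skb_kfree_head(void *head, unsigned int head_size)
{
	if (head_size == SKB_SMALL_HEAD_CACHE_SIZE)
		kmem_cache_free(skb_small_head_cache, head);
	else
		kfree(head);
}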