Message-ID: <20210111182801.12609-5-alobakin@pm.me>
Date: Mon, 11 Jan 2021 18:29:44 +0000
From: Alexander Lobakin <alobakin@...me>
To: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
Cc: Eric Dumazet <edumazet@...gle.com>,
Edward Cree <ecree@...arflare.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Willem de Bruijn <willemb@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Alexander Lobakin <alobakin@...me>,
Steffen Klassert <steffen.klassert@...unet.com>,
Guillaume Nault <gnault@...hat.com>,
Yadu Kishore <kyk.segfault@...il.com>,
Al Viro <viro@...iv.linux.org.uk>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH net-next 5/5] skbuff: refill skb_cache early from deferred-to-consume entries

Instead of unconditionally queueing ready-to-consume skbuff_heads
onto flush_skb_cache, feed them back into skb_cache if it is not
already full.
This greatly reduces the frequency of kmem_cache_alloc_bulk() calls.
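The refill-before-flush logic can be sketched outside the kernel as a
small userspace model. Names and sizes here (`CACHE_SIZE`, `defer_free`,
the struct fields) are illustrative stand-ins for the patch's
`NAPI_SKB_CACHE_SIZE`, `skb_cache` and `flush_skb_cache`, not the actual
kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for NAPI_SKB_CACHE_SIZE. */
#define CACHE_SIZE 4

struct obj { int id; };

struct napi_like_cache {
	size_t skb_count;
	struct obj *skb_cache[CACHE_SIZE];
	size_t flush_skb_count;
	struct obj *flush_skb_cache[CACHE_SIZE];
};

/* Free path: refill the alloc-side cache first, so later allocations
 * can reuse the entries without a bulk allocation; only the overflow
 * goes to the flush list that would later be released in bulk. */
static void defer_free(struct napi_like_cache *nc, struct obj *o)
{
	if (nc->skb_count < CACHE_SIZE) {
		nc->skb_cache[nc->skb_count++] = o;
		return;
	}
	nc->flush_skb_cache[nc->flush_skb_count++] = o;
}
```

With this ordering, the alloc-side cache refills from freed entries, so
`kmem_cache_alloc_bulk()` only runs once both deferred entries and the
cache are exhausted.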
Signed-off-by: Alexander Lobakin <alobakin@...me>
---
net/core/skbuff.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 57a7307689f3..ba0d5611635e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,6 +904,11 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
+		nc->skb_cache[nc->skb_count++] = skb;
+		return;
+	}
+
 	/* record skb to CPU local list */
 	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
--
2.30.0