Date:   Tue, 21 Sep 2021 02:47:13 +0900
From:   Kangmin Park <l4stpr0gr4m@...il.com>
To:     "David S. Miller" <davem@...emloft.net>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        Alexander Lobakin <alobakin@...me>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Guillaume Nault <gnault@...hat.com>,
        Vasily Averin <vvs@...tuozzo.com>,
        Cong Wang <cong.wang@...edance.com>, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: [RFC PATCH net-next] Introducing lockless cache built on top of slab allocator

This is just an introduction and a proof of concept.
The patch code is based on other RFC patches, so it is not
correct yet; it is only a simple proof of concept.

Recently, the block layer implemented a per-cpu, lockless cache on
top of the slab allocator. It can be used for IO polling.

Link: https://lwn.net/Articles/868070/
Link: https://www.spinics.net/lists/linux-block/msg71964.html

It brought an IOPS increase (performance improved by about 10% in
the block layer).
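
Roughly, the idea is as follows (a minimal sketch with made-up names,
for illustration only; the real block layer and slab implementations
differ): each CPU keeps a small array of free objects in front of the
slab allocator, so the hot path is a plain array pop with no locking,
which is safe in contexts pinned to a single CPU such as NAPI softirq
or IO polling.

#define PCPU_OBJ_CACHE_SIZE	64	/* hypothetical, cf. NAPI_SKB_CACHE_SIZE */

struct pcpu_obj_cache {
	unsigned int	count;
	void		*objs[PCPU_OBJ_CACHE_SIZE];
};

static void *pcpu_cache_get(struct pcpu_obj_cache __percpu *cache,
			    struct kmem_cache *s)
{
	struct pcpu_obj_cache *c = this_cpu_ptr(cache);

	/* Refill from the slab allocator only when the cache is empty. */
	if (unlikely(!c->count))
		c->count = kmem_cache_alloc_bulk(s, GFP_ATOMIC,
						 PCPU_OBJ_CACHE_SIZE,
						 c->objs);
	if (unlikely(!c->count))
		return NULL;

	/* Lockless pop: only the local CPU touches this array. */
	return c->objs[--c->count];
}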

And there are attempts to implement such a per-cpu, lockless cache
in the slab allocator itself.

Link: https://lore.kernel.org/linux-mm/20210920154816.31832-1-42.hyeyoo@gmail.com/T/#u
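
The diff below uses the interface sketched in that thread. Roughly
(the exact function name, flag, and return convention are not settled
and should be read as assumptions, not a finalized API):

/* Allocate an object, served from a per-cpu, lockless cache in front
 * of the regular slab paths. SLB_LOCKLESS_CACHE is a hypothetical
 * flag selecting the cached path.
 */
void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags);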

If this cache is implemented successfully, how about using it to
allocate skbs instead of kmem_cache_alloc_bulk() in
napi_skb_cache_get()?

I would like to hear your comments and opinions.

Signed-off-by: Kangmin Park <l4stpr0gr4m@...il.com>
---
 net/core/skbuff.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7c2ab27fcbf9..f9a9deca423d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -170,11 +170,15 @@ static struct sk_buff *napi_skb_cache_get(void)
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 	struct sk_buff *skb;
 
-	if (unlikely(!nc->skb_count))
-		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
-						      GFP_ATOMIC,
-						      NAPI_SKB_CACHE_BULK,
-						      nc->skb_cache);
+	if (unlikely(!nc->skb_count)) {
+		/* kmem_cache_alloc_cached should be changed to return the size of
+		 * the allocated cache
+		 */
+		nc->skb_cache = kmem_cache_alloc_cached(skbuff_head_cache,
+							GFP_ATOMIC | SLB_LOCKLESS_CACHE);
+		nc->skb_count = this_cpu_ptr(skbuff_head_cache)->size;
+	}
+
 	if (unlikely(!nc->skb_count))
 		return NULL;
 
-- 
2.26.2
