Message-ID: <CANn89iKBYdc6r5fYi-tCqgjD99T=YXcrUiuuPQA9K1nXbtGnBA@mail.gmail.com>
Date: Thu, 16 Oct 2025 06:29:24 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next] net: shrink napi_skb_cache_put()

On Thu, Oct 16, 2025 at 5:56 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Thu, Oct 16, 2025 at 4:08 AM Alexander Lobakin
> <aleksander.lobakin@...el.com> wrote:
> >
> > From: Eric Dumazet <edumazet@...gle.com>
> >
> > BTW doesn't napi_skb_cache_get() (inc. get_bulk()) suffer the same way?
>
> Probably, like the other calls to napi_skb_cache_put().
>
> No loop there, so I guess it's not a big deal.
>
> I was looking at napi_skb_cache_put() because it lacks NUMA awareness,
> and I was curious to experiment with some strategies there.

If we cache the kmem_cache_size() result in net_hotdata, the compiler is
able to eliminate the dead code when CONFIG_KASAN=n.
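
(To spell out the dead-code point, here is a standalone sketch, not kernel
code: cache_size(), unpoison() and hot are hypothetical stand-ins for
kmem_cache_size(), kasan_mempool_unpoison_object() and net_hotdata. With
KASAN disabled the unpoison helper is an empty inline, but an argument that
calls an out-of-line function still has to be evaluated, whereas a plain
field load can be dropped entirely.)

/*
 * Standalone sketch, not kernel code: cache_size(), unpoison() and hot
 * stand in for kmem_cache_size(), kasan_mempool_unpoison_object() and
 * net_hotdata.
 */
#include <stddef.h>

struct cache;                           /* opaque, like struct kmem_cache */
size_t cache_size(struct cache *c);     /* out-of-line, not provably pure */

struct hotdata {
        struct cache    *skbuff_cache;
        unsigned int    skbuff_cache_size;      /* filled once at init */
};
extern struct hotdata hot;

/* With CONFIG_KASAN=n the kernel helper is an empty static inline. */
static inline void unpoison(void *obj, size_t size) { }

void put_old(void *skb)
{
        /* The unused size still forces a call to cache_size(): the compiler
         * cannot prove the external function has no side effects. */
        unpoison(skb, cache_size(hot.skbuff_cache));
}

void put_new(void *skb)
{
        /* A plain field load has no side effects, so the whole statement is
         * dead and this compiles down to an empty function. */
        unpoison(skb, hot.skbuff_cache_size);
}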

Maybe this looks better?

diff --git a/include/net/hotdata.h b/include/net/hotdata.h
index 1aca9db99320f942b06b7d412d428a3045e87e60..f643e6a4647cc5e694a7044797f01a1107db46a9 100644
--- a/include/net/hotdata.h
+++ b/include/net/hotdata.h
@@ -33,9 +33,10 @@ struct net_hotdata {
        struct kmem_cache       *skbuff_cache;
        struct kmem_cache       *skbuff_fclone_cache;
        struct kmem_cache       *skb_small_head_cache;
+       u32                     skbuff_cache_size;
 #ifdef CONFIG_RPS
-       struct rps_sock_flow_table __rcu *rps_sock_flow_table;
        u32                     rps_cpu_mask;
+       struct rps_sock_flow_table __rcu *rps_sock_flow_table;
 #endif
        struct skb_defer_node __percpu *skb_defer_nodes;
        int                     gro_normal_batch;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c9e33f26852b63e930e33a406c19cc02f1821746..62b1acca55c7fd3e1fb7614cb0c625206db0ab3f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -365,7 +365,7 @@ static struct sk_buff *napi_skb_cache_get(void)

        skb = nc->skb_cache[--nc->skb_count];
        local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
-       kasan_mempool_unpoison_object(skb, kmem_cache_size(net_hotdata.skbuff_cache));
+       kasan_mempool_unpoison_object(skb, net_hotdata.skbuff_cache_size);

        return skb;
 }
@@ -1504,7 +1504,7 @@ static void napi_skb_cache_put(struct sk_buff *skb)
        if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
                for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
                        kasan_mempool_unpoison_object(nc->skb_cache[i],
-                                               kmem_cache_size(net_hotdata.skbuff_cache));
+                                               net_hotdata.skbuff_cache_size);

                kmem_cache_free_bulk(net_hotdata.skbuff_cache, NAPI_SKB_CACHE_HALF,
                                     nc->skb_cache + NAPI_SKB_CACHE_HALF);
@@ -5164,6 +5164,7 @@ void __init skb_init(void)
                                              offsetof(struct sk_buff, cb),
                                              sizeof_field(struct sk_buff, cb),
                                              NULL);
+       net_hotdata.skbuff_cache_size = kmem_cache_size(net_hotdata.skbuff_cache);
        net_hotdata.skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
                                                sizeof(struct sk_buff_fclones),
                                                0,
