Message-ID: <48edb201-3c6f-4a94-92dc-bd0d8c0a55b5@intel.com>
Date: Thu, 16 Oct 2025 14:38:44 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller"
<davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>
CC: Simon Horman <horms@...nel.org>, Kuniyuki Iwashima <kuniyu@...gle.com>,
<netdev@...r.kernel.org>, <eric.dumazet@...il.com>, Alexander Lobakin
<aleksander.lobakin@...el.com>
Subject: Re: [PATCH v2 net-next] net: shrink napi_skb_cache_{put,get}() and
napi_skb_cache_get_bulk()
On 10/16/2025 11:29 AM, Eric Dumazet wrote:
> The following loop in napi_skb_cache_put() is unrolled by the compiler
> even if CONFIG_KASAN is not enabled:
>
> for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
> kasan_mempool_unpoison_object(nc->skb_cache[i],
> kmem_cache_size(net_hotdata.skbuff_cache));
>
> This sequence is repeated 32 times, for a total of 384 bytes.
>
> 48 8b 3d 00 00 00 00    mov    net_hotdata.skbuff_cache(%rip),%rdi
> e8 00 00 00 00          call   kmem_cache_size
>
> This is because kmem_cache_size() is neither inline nor const,
> while kasan_mempool_unpoison_object() is an inline function.
>
> Cache kmem_cache_size() result in a variable, so that
> the compiler can remove dead code (and variable) when/if
> CONFIG_KASAN is unset.
>
> After this patch, napi_skb_cache_put() is inlined in its callers,
> and we avoid one kmem_cache_size() call in napi_skb_cache_get()
> and napi_skb_cache_get_bulk().
Looks like a reasonable way to fix this to me.
Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Alexander Lobakin <aleksander.lobakin@...el.com>
> ---
> net/core/skbuff.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index bc12790017b0b5c0be99f8fb9d362b3730fa4eb0..143a2ddf0d56ed8037bd46bddc1d7aeac296085c 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -274,6 +274,11 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
> }
> EXPORT_SYMBOL(__netdev_alloc_frag_align);
>
> +/* Cache kmem_cache_size(net_hotdata.skbuff_cache) to help the compiler
> + * remove dead code (and skbuff_cache_size) when CONFIG_KASAN is unset.
> + */
> +static u32 skbuff_cache_size __read_mostly;
> +
> static struct sk_buff *napi_skb_cache_get(void)
> {
> struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> @@ -293,7 +298,7 @@ static struct sk_buff *napi_skb_cache_get(void)
>
> skb = nc->skb_cache[--nc->skb_count];
> local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
> - kasan_mempool_unpoison_object(skb, kmem_cache_size(net_hotdata.skbuff_cache));
> + kasan_mempool_unpoison_object(skb, skbuff_cache_size);
>
> return skb;
> }
> @@ -345,11 +350,9 @@ u32 napi_skb_cache_get_bulk(void **skbs, u32 n)
>
> get:
> for (u32 base = nc->skb_count - n, i = 0; i < n; i++) {
> - u32 cache_size = kmem_cache_size(net_hotdata.skbuff_cache);
> -
> skbs[i] = nc->skb_cache[base + i];
>
> - kasan_mempool_unpoison_object(skbs[i], cache_size);
> + kasan_mempool_unpoison_object(skbs[i], skbuff_cache_size);
This loop already looked cache_size up separately and then passed it to
the call; hoisting that lookup out of the loop would be another way to
avoid the repeated calls. However, using the global __read_mostly
variable makes sense: it is initialized once instead of on every call,
so it's cheaper.
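
As an aside, here's a minimal userspace sketch of the init-once
pattern (hypothetical names standing in for kmem_cache_size() and
skbuff_cache_size; not the kernel code itself):

#include <stdio.h>

/* Stand-in for kmem_cache_size(): an out-of-line, non-const call
 * whose result is stable after initialization.
 */
static unsigned int query_object_size(void)
{
	return 256; /* arbitrary value for the sketch */
}

/* Stand-in for skbuff_cache_size: written once during init, only
 * read on hot paths afterwards (__read_mostly in the kernel).
 */
static unsigned int cached_object_size;

static void subsystem_init(void)
{
	/* Pay the function call exactly once, as skb_init() now does. */
	cached_object_size = query_object_size();
}

int main(void)
{
	subsystem_init();

	/* Hot path: a plain load instead of a call per use. */
	for (int i = 0; i < 4; i++)
		printf("object size: %u\n", cached_object_size);
	return 0;
}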
> memset(skbs[i], 0, offsetof(struct sk_buff, tail));
> }
>
> @@ -1428,7 +1431,7 @@ static void napi_skb_cache_put(struct sk_buff *skb)
> if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
> for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
> kasan_mempool_unpoison_object(nc->skb_cache[i],
> - kmem_cache_size(net_hotdata.skbuff_cache));
> + skbuff_cache_size);
Previously, this inlined to a bunch of calls guarded by
kasan_enabled(), but the compiler couldn't drop them because
kmem_cache_size() could have side effects, so each call had to be
emitted even though its result was dead. Now it sees a plain load of
skbuff_cache_size. Even though that variable isn't constant, the load
has no side effects, so once kasan_enabled() evaluates to false the
compiler properly elides the entire block. Makes sense.
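
To make that concrete, a hedged standalone sketch (hypothetical names;
compile with gcc -O2 -c) of why the load can be elided while the call
cannot:

/* Models kasan_mempool_unpoison_object() with CONFIG_KASAN unset:
 * an inline that expands to nothing.
 */
static inline void unpoison(void *obj, unsigned int size)
{
	(void)obj;
	(void)size;
}

/* Models kmem_cache_size(): defined in another translation unit,
 * so the compiler must assume it may have side effects.
 */
extern unsigned int lookup_size(void);

/* Models skbuff_cache_size after initialization. */
unsigned int cached_size;

void put_before(void *obj)
{
	/* The lookup_size() call is emitted even though its result
	 * is unused; unrolled 32 times, that is 32 dead calls.
	 */
	unpoison(obj, lookup_size());
}

void put_after(void *obj)
{
	/* Loading cached_size has no side effects, so the whole
	 * statement folds away once unpoison() inlines to nothing.
	 */
	unpoison(obj, cached_size);
}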
>
> kmem_cache_free_bulk(net_hotdata.skbuff_cache, NAPI_SKB_CACHE_HALF,
> nc->skb_cache + NAPI_SKB_CACHE_HALF);
> @@ -5116,6 +5119,8 @@ void __init skb_init(void)
> offsetof(struct sk_buff, cb),
> sizeof_field(struct sk_buff, cb),
> NULL);
> + skbuff_cache_size = kmem_cache_size(net_hotdata.skbuff_cache);
> +
> net_hotdata.skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
> sizeof(struct sk_buff_fclones),
> 0,