Message-ID: <73ead084-2761-4106-8149-36301d0b0ea0@intel.com>
Date: Thu, 16 Oct 2025 17:24:18 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Eric Dumazet <edumazet@...gle.com>
CC: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski
<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, Kuniyuki Iwashima <kuniyu@...gle.com>,
<netdev@...r.kernel.org>, <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next] net: shrink napi_skb_cache_put()
From: Eric Dumazet <edumazet@...gle.com>
Date: Thu, 16 Oct 2025 06:36:55 -0700
> On Thu, Oct 16, 2025 at 6:29 AM Eric Dumazet <edumazet@...gle.com> wrote:
>>
>> On Thu, Oct 16, 2025 at 5:56 AM Eric Dumazet <edumazet@...gle.com> wrote:
>>>
>>> On Thu, Oct 16, 2025 at 4:08 AM Alexander Lobakin
>>> <aleksander.lobakin@...el.com> wrote:
>>>>
>>>> From: Eric Dumazet <edumazet@...gle.com>
>>>>
>>>> BTW doesn't napi_skb_cache_get() (inc. get_bulk()) suffer the same way?
>>>
>>> Probably, like other calls to napi_skb_cache_put().
>>>
>>> No loop there, so I guess it's not a big deal.
>>>
>>> I was looking at napi_skb_cache_put() because there is a lack of NUMA awareness,
>>> and was curious to experiment with some strategies there.
>>
>> If we cache kmem_cache_size() in net_hotdata, the compiler is able to
>> eliminate dead code for CONFIG_KASAN=n.
>>
>> Maybe this looks better?
>
> No need to put this in net_hotdata; I was distracted by a 4-byte hole
> there. We can keep that hole for something hot later.
Yeah, this looks good! It's not "hot" anyway, so let it stay freestanding.
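
(For anyone skimming the thread, here is a rough standalone sketch -- not
the kernel headers, just the shape of the argument, with simplified
prototypes and a bare `skbuff_cache' pointer standing in for
net_hotdata.skbuff_cache -- of why the cached variable helps: with
CONFIG_KASAN=n the unpoison helper is an empty static inline, so a call
whose size argument is a plain load becomes dead code, while an
out-of-line kmem_cache_size() call still has to be emitted because the
compiler cannot prove it is side-effect free.)

/* Standalone sketch, simplified prototypes -- not the real headers. */
#include <stddef.h>

struct kmem_cache;

/* Out-of-line in the kernel; the compiler must keep calls to it. */
size_t kmem_cache_size(struct kmem_cache *s);

#ifdef CONFIG_KASAN
void kasan_mempool_unpoison_object(void *ptr, size_t size);
#else
/* Empty when KASAN is off, mirroring the kernel's no-op stub. */
static inline void kasan_mempool_unpoison_object(void *ptr, size_t size) { }
#endif

extern struct kmem_cache *skbuff_cache;
static unsigned int skbuff_cache_size;	/* written once at init time */

void old_form(void *skb)
{
	/* Even with the no-op stub, the argument still forces a call to
	 * kmem_cache_size(), so this statement never disappears.
	 */
	kasan_mempool_unpoison_object(skb, kmem_cache_size(skbuff_cache));
}

void new_form(void *skb)
{
	/* A plain variable load has no side effects, so with
	 * CONFIG_KASAN=n the whole statement compiles away.
	 */
	kasan_mempool_unpoison_object(skb, skbuff_cache_size);
}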
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index bc12790017b0b5c0be99f8fb9d362b3730fa4eb0..f3b9356bebc06548a055355c5d1eb04c480f813f 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -274,6 +274,8 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
>  }
>  EXPORT_SYMBOL(__netdev_alloc_frag_align);
>
> +u32 skbuff_cache_size __read_mostly;
...but probably `static`?
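
(i.e., just a sketch of what the suggestion would look like, nothing more:)

/* file-local to skbuff.c; written once in skb_init(), read in the fast paths */
static u32 skbuff_cache_size __read_mostly;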
> +
>  static struct sk_buff *napi_skb_cache_get(void)
>  {
>  	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> @@ -293,7 +295,7 @@ static struct sk_buff *napi_skb_cache_get(void)
>
>  	skb = nc->skb_cache[--nc->skb_count];
>  	local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
> -	kasan_mempool_unpoison_object(skb, kmem_cache_size(net_hotdata.skbuff_cache));
> +	kasan_mempool_unpoison_object(skb, skbuff_cache_size);
>
>  	return skb;
>  }
> @@ -1428,7 +1430,7 @@ static void napi_skb_cache_put(struct sk_buff *skb)
>  	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
>  		for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
>  			kasan_mempool_unpoison_object(nc->skb_cache[i],
> -						      kmem_cache_size(net_hotdata.skbuff_cache));
> +						      skbuff_cache_size);
>
>  		kmem_cache_free_bulk(net_hotdata.skbuff_cache, NAPI_SKB_CACHE_HALF,
>  				     nc->skb_cache + NAPI_SKB_CACHE_HALF);
> @@ -5116,6 +5118,8 @@ void __init skb_init(void)
>  					      offsetof(struct sk_buff, cb),
>  					      sizeof_field(struct sk_buff, cb),
>  					      NULL);
> +	skbuff_cache_size = kmem_cache_size(net_hotdata.skbuff_cache);
> +
>  	net_hotdata.skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
>  						    sizeof(struct sk_buff_fclones),
>  						    0,
Thanks,
Olek