Message-ID: <4aa71029-8a4a-0c6d-438d-71cebb11ccea@intel.com>
Date: Wed, 15 Feb 2023 19:01:19 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Edward Cree <ecree.xilinx@...il.com>, <davem@...emloft.net>,
<netdev@...r.kernel.org>, <edumazet@...gle.com>,
<pabeni@...hat.com>, <willemb@...gle.com>, <fw@...len.de>
Subject: Re: [PATCH net-next 2/3] net: skbuff: cache one skb_ext for use by
GRO
From: Jakub Kicinski <kuba@...nel.org>
Date: Wed, 15 Feb 2023 09:52:00 -0800
> On Wed, 15 Feb 2023 17:17:53 +0100 Alexander Lobakin wrote:
>>> On 15/02/2023 03:43, Jakub Kicinski wrote:
>>>> On the driver -> GRO path we can avoid thrashing the kmemcache
>>>> by holding onto one skb_ext.
>>>
>>> Hmm, will one be enough if we're doing GRO_NORMAL batching?
>>> As for e.g. UDP traffic up to 8 skbs (by default) can have
>>> overlapping lifetimes.
>>>
>> I thought of an array of %NAPI_SKB_CACHE_SIZE to be honest. In
>> everything I've ever tested, no cache (for any netstack-related object)
>> is enough if it can't serve one full NAPI poll :D
>
> I was hoping to leave sizing of the cache until we have some data from
> a production network (or at least representative packet traces).
>
> NAPI_SKB_CACHE_SIZE kinda assumes we're not doing much GRO, right?
It assumes we GRO a lot :D
Imagine that you have 64 frames during one poll and the GRO layer
decides to coalesce them in batches of 16. Then only 4 skbs will be
used; the rest will go as frags (with "stolen heads") -> 60 of 64 skbs
will return to that skb cache and will then be reused by napi_build_skb().
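
To make the array idea a bit more concrete, here's roughly what I have
in mind, modelled on the existing napi_alloc_cache skb cache (names and
layout are invented on the spot, this is not what your patch does):

struct napi_ext_cache {
	u32		count;
	struct skb_ext	*cache[NAPI_SKB_CACHE_SIZE];
};
static DEFINE_PER_CPU(struct napi_ext_cache, napi_ext_cache);

/* Driver -> GRO path: take a cached ext if we have one, fall back to
 * the kmem_cache otherwise. (Re)initialising refcnt/offsets is omitted
 * here for brevity.
 */
static struct skb_ext *napi_ext_get(void)
{
	struct napi_ext_cache *nec = this_cpu_ptr(&napi_ext_cache);

	if (nec->count)
		return nec->cache[--nec->count];

	return kmem_cache_alloc(skbuff_ext_cache, GFP_ATOMIC);
}

/* Free/reuse path: stash the ext into the per-CPU array instead of
 * hitting the kmem_cache, as long as there's room.
 */
static void napi_ext_put_cached(struct skb_ext *ext)
{
	struct napi_ext_cache *nec = this_cpu_ptr(&napi_ext_cache);

	if (nec->count < NAPI_SKB_CACHE_SIZE) {
		nec->cache[nec->count++] = ext;
		return;
	}

	kmem_cache_free(skbuff_ext_cache, ext);
}

Same principle as with the skb cache: size it so that one full NAPI
poll's worth of exts can be absorbed without touching the kmem_cache.
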
> And the current patch feeds the cache exclusively from GRO...
>
>> + agree with Paolo re napi_reuse_skb(), it's used only in the NAPI
>> context and already recycles a lot of stuff, so we can safely speed it up here.
>
> LMK what's your opinion on touching the other potential spots, too.
> (in Paolo's subthread).
<went to take a look already>
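
For napi_reuse_skb() itself, what I'm picturing is something along these
lines: if the skb carries extensions and we hold the only reference,
hand the block to such a per-CPU cache instead of letting it go back to
the kmem_cache (again just a sketch with made-up names, not a claim
about how the final patch should look):

static void napi_ext_recycle(struct sk_buff *skb)
{
	struct skb_ext *ext = skb->extensions;

	if (!skb->active_extensions)
		return;

	skb->active_extensions = 0;
	skb->extensions = NULL;

	/* Sole owner: keep the block (refcnt stays at 1) for the next
	 * napi_ext_get(); otherwise just drop our reference as usual.
	 */
	if (refcount_read(&ext->refcnt) == 1)
		napi_ext_put_cached(ext);
	else
		__skb_ext_put(ext);
}

It's NAPI-only, so the plain per-CPU access should be safe there without
any extra protection.
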
Thanks,
Olek