Date: Tue, 4 Apr 2023 18:50:30 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Eric Dumazet <edumazet@...gle.com>
CC: Jakub Kicinski <kuba@...nel.org>, <davem@...emloft.net>,
<netdev@...r.kernel.org>, <pabeni@...hat.com>, <hawk@...nel.org>,
<ilias.apalodimas@...aro.org>
Subject: Re: [RFC net-next 1/2] page_pool: allow caching from safely localized
NAPI
On 2023/4/4 12:21, Eric Dumazet wrote:
> On Tue, Apr 4, 2023 at 2:53 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>
>> Interesting.
>> I wonder if we can make this more generic by adding the skb to a
>> per-napi list instead of sd->defer_list, so that we can always use
>> NAPI kicking to flush skbs, as net_tx_action() does for
>> sd->completion_queue, instead of softirq kicking?
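Just to illustrate what I mean, a rough sketch (kernel context assumed;
the napi_defer_list/napi_defer_lock fields are hypothetical, nothing
like them exists in struct napi_struct today):

static void skb_defer_free_to_napi(struct napi_struct *napi,
				   struct sk_buff *skb)
{
	/* napi_defer_lock/napi_defer_list are hypothetical fields
	 * that would have to be added to struct napi_struct.
	 */
	spin_lock_bh(&napi->napi_defer_lock);
	skb->next = napi->napi_defer_list;
	napi->napi_defer_list = skb;
	spin_unlock_bh(&napi->napi_defer_lock);

	/* Kick the napi so its poll routine drains the list, the
	 * same way net_tx_action() drains sd->completion_queue.
	 */
	napi_schedule(napi);
}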
>
> We do not have a direct skb -> napi association yet; we use an
> expensive hash lookup instead.
>
> I had intended to add per-cpu caches to this infrastructure, so as
> not to acquire the remote cpu's defer_lock for one skb at a time.
> (I think this is causing some regressions for small packets with no frags.)
Is there any reason not to reintroduce the per-socket defer_list, so
that the defer_lock is not acquired for one skb at a time, instead of
adding per-cpu caches?
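For reference, a sketch of the kind of batching I mean (the
sk_defer_head/sk_defer_tail/sk_defer_count fields are hypothetical,
and it assumes all skbs on the socket's list were allocated on the
same cpu; sd->defer_list/defer_lock/defer_count are the existing
softnet_data fields):

static void sk_defer_free_flush(struct sock *sk, int cpu)
{
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);

	if (!sk->sk_defer_head)		/* hypothetical field */
		return;

	/* Splice the socket's whole batch onto the remote cpu's
	 * defer_list with a single defer_lock acquisition, instead
	 * of taking the lock once per skb.
	 */
	spin_lock_bh(&sd->defer_lock);
	sk->sk_defer_tail->next = sd->defer_list;
	sd->defer_list = sk->sk_defer_head;
	sd->defer_count += sk->sk_defer_count;
	spin_unlock_bh(&sd->defer_lock);

	sk->sk_defer_head = sk->sk_defer_tail = NULL;
	sk->sk_defer_count = 0;
}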
>
>>
>> And since we seem to know which napi is bound to a specific socket
>> through the busypoll mechanism, can we reuse that to release a skb
>> to the napi bound to that socket?
>
> busypoll is not often used, and we usually burn (spinning) cycles
> there, so I am not sure we want to optimize it?
How about optimizing only napi_by_id()?
To be honest, I am not sure exactly how to optimize it; maybe add a
callback to notify the socket when a napi is deleted?
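Something along these lines, perhaps (sketch only; the watcher list
and the deletion hook are hypothetical, today napi_by_id() just does
a hash lookup):

/* The socket caches the napi pointer once, and napi deletion
 * clears every cached pointer, so no per-packet napi_by_id()
 * hash lookup is needed.
 */
struct napi_watcher {
	struct napi_struct __rcu *napi;
	struct list_head	  list;
};

/* hypothetically called from napi_hash_del(), before the napi
 * is freed, with 'watchers' holding the sockets caching it.
 */
static void napi_notify_del(struct list_head *watchers)
{
	struct napi_watcher *w;

	list_for_each_entry(w, watchers, list)
		RCU_INIT_POINTER(w->napi, NULL);

	synchronize_net();	/* no reader may still see the napi */
}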
>
>>
>>>
>>> The main case we'll miss out on is when the application runs on the
>>> same CPU as the NAPI. In that case we don't use the deferred skb free
>>> path. We could disable softirq on that path, too... maybe?
>>>
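For reference, my reading of the current skb_attempt_defer_free() is
that the same-cpu case bails out before the deferred path, roughly
(paraphrased, not a verbatim excerpt):

static void skb_attempt_defer_free_sketch(struct sk_buff *skb)
{
	int cpu = skb->alloc_cpu;

	/* The case mentioned above: skbs freed on the cpu that
	 * allocated them are freed directly and never take the
	 * deferred path.
	 */
	if (WARN_ON_ONCE(cpu >= nr_cpu_ids) ||
	    !cpu_online(cpu) ||
	    cpu == raw_smp_processor_id()) {
		__kfree_skb(skb);
		return;
	}

	/* ... otherwise queue on the remote cpu's sd->defer_list ... */
}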