Message-ID: <a9ab432f-6d3c-aa8e-66bd-a82fac5d1098@gmail.com>
Date: Tue, 24 Nov 2020 20:11:42 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH net-next 1/3] net: remove napi_hash_del() from
driver-facing API
On 11/24/20 7:54 PM, Jakub Kicinski wrote:
> On Tue, 24 Nov 2020 19:00:50 +0100 Eric Dumazet wrote:
>> On 9/9/20 7:37 PM, Jakub Kicinski wrote:
>>> We allow drivers to call napi_hash_del() before calling
>>> netif_napi_del() to batch RCU grace periods. This makes
>>> the API asymmetric and leaks internal implementation details.
>>> Soon we will want the grace period to protect more than just
>>> the NAPI hash table.
>>>
>>> Restructure the API and have drivers call a new function -
>>> __netif_napi_del() if they want to take care of RCU waits.
>>>
>>> Note that only core was checking the return status from
>>> napi_hash_del() so the new helper does not report if the
>>> NAPI was actually deleted.
>>>
>>> Some notes on driver oddness:
>>> - veth observed the grace period before calling netif_napi_del()
>>> but that should not matter
>>> - myri10ge observed normal RCU flavor
>>> - bnx2x and enic did not actually observe the grace period
>>> (unless they did so implicitly)
>>> - virtio_net and enic only unhashed Rx NAPIs
>>>
>>> The last two points seem to indicate that the calls to
>>> napi_hash_del() were a leftover rather than an optimization.
>>> Regardless, it's easy enough to correct them.
>>>
>>> This patch may introduce extra synchronize_net() calls for
>>> interfaces which set NAPI_STATE_NO_BUSY_POLL and depend on
>>> free_netdev() to call netif_napi_del(). This seems inevitable
>>> since we want to use RCU for netpoll dev->napi_list traversal,
>>> and almost no drivers set IFF_DISABLE_NETPOLL.
>>>
>>> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
>>
>> After this patch, gro_cells_destroy() became damn slow
>> on hosts with a lot of cores.
>>
>> After your change, we have one additional synchronize_net() per cpu as
>> you stated in your changelog.
>
> Sorry :S I hope it didn't waste too much of your time...
Do not worry ;)
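
To make the cost concrete: gro_cells_destroy() tears down one NAPI per
possible cpu, roughly like this (simplified sketch of net/core/gro_cells.c,
from memory, not the exact tree):

	/* simplified sketch of gro_cells_destroy(), from memory */
	void gro_cells_destroy(struct gro_cells *gcells)
	{
		int i;

		if (!gcells->cells)
			return;
		for_each_possible_cpu(i) {
			struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);

			napi_disable(&cell->napi);
			/* after your patch this implies one synchronize_net()
			 * per cpu, hence the slowdown on hosts with many cores
			 */
			netif_napi_del(&cell->napi);
			__skb_queue_purge(&cell->napi_skbs);
		}
		free_percpu(gcells->cells);
		gcells->cells = NULL;
	}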
>
>> gro_cells_init() is setting NAPI_STATE_NO_BUSY_POLL, and this was enough
>> to not have one synchronize_net() call per netif_napi_del().
>>
>> I will test something like : [diff snipped]
>> I am not yet convinced the synchronize_net() is needed, since these
>> NAPI structs are not involved in busy polling.
>
> IDK how this squares against netpoll, though?
>
Can we actually attach netpoll to a virtual device using gro_cells?
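
In any case, what I will test batches the grace period: switch the loop to
__netif_napi_del() and do a single synchronize_net() at the end. Untested
sketch, and as said above maybe even that one call is not needed, since
these NAPI structs are not involved in busy polling:

	/* sketch of what I plan to test, not a final patch */
	void gro_cells_destroy(struct gro_cells *gcells)
	{
		int i;

		if (!gcells->cells)
			return;
		for_each_possible_cpu(i) {
			struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);

			napi_disable(&cell->napi);
			__netif_napi_del(&cell->napi);	/* no per-cpu grace period */
			__skb_queue_purge(&cell->napi_skbs);
		}
		/* one grace period for all cells instead of one per cpu */
		synchronize_net();

		free_percpu(gcells->cells);
		gcells->cells = NULL;
	}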