Message-ID: <a4392c6a-c054-45f2-866f-f2081e55c149@intel.com>
Date: Tue, 18 Mar 2025 18:10:39 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Eric Dumazet <edumazet@...gle.com>
CC: <intel-wired-lan@...ts.osuosl.org>, Michal Kubiak <michal.kubiak@...el.com>,
	Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Tony Nguyen <anthony.l.nguyen@...el.com>,
	Przemek Kitszel <przemyslaw.kitszel@...el.com>,
	Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
	"Paolo Abeni" <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
	"Daniel Borkmann" <daniel@...earbox.net>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	John Fastabend <john.fastabend@...il.com>, Simon Horman <horms@...nel.org>,
	<bpf@...r.kernel.org>, <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 07/16] idpf: link NAPIs to queues
From: Alexander Lobakin <aleksander.lobakin@...el.com>
Date: Wed, 12 Mar 2025 18:16:11 +0100
> From: Eric Dumazet <edumazet@...gle.com>
> Date: Fri, 7 Mar 2025 11:28:36 +0100
>
>> On Wed, Mar 5, 2025 at 5:22 PM Alexander Lobakin
>> <aleksander.lobakin@...el.com> wrote:
>>>
>>> Add the missing linking of NAPIs to netdev queues when enabling
>>> interrupt vectors in order to support NAPI configuration and
>>> interfaces requiring get_rx_queue()->napi to be set (like XSk
>>> busy polling).
>>>
>>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
>>> ---
>>> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 30 +++++++++++++++++++++
>>> 1 file changed, 30 insertions(+)
>>>
>>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> index 2f221c0abad8..a3f6e8cff7a0 100644
>>> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> @@ -3560,8 +3560,11 @@ void idpf_vport_intr_rel(struct idpf_vport *vport)
>>> static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
>>> {
>>> struct idpf_adapter *adapter = vport->adapter;
>>> + bool unlock;
>>> int vector;
>>>
>>> + unlock = rtnl_trylock();
>>
>> This is probably not what you want here ?
>>
>> If another thread is holding RTNL, then rtnl_trylock() will not add
>> any protection.
>
> Yep, I know. The trylock() is there because this function can be called
> in two scenarios:
>
> 1) .ndo_close(), when RTNL is already locked;
> 2) "soft reset" aka "stop the traffic, reallocate the queues, start the
> traffic", when RTNL is not taken.
>
> The second one spits a WARN if RTNL is not held. So this trylock() will
> do nothing for the first scenario and will take the lock for the second
> one.
>
> If that is not correct, let me know and I'll do it a different way
> (maybe it's better to unconditionally take the lock at the callsite for
> the second case?).
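
To make the two call paths concrete, here is a minimal sketch (not the
actual patch body, which is elided) of how the trylock maps onto them,
with the gap Eric points out marked:

static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
{
	bool unlock;

	/*
	 * 1) .ndo_close(): this task already holds RTNL, so the trylock
	 *    fails, unlock stays false, and we stay protected by the
	 *    caller's RTNL.
	 * 2) soft reset: RTNL is normally free, so the trylock succeeds
	 *    and we take/release it here.
	 *
	 * Gap: if some *other* task happens to hold RTNL during a soft
	 * reset, rtnl_trylock() fails, unlock stays false, and the body
	 * below runs with no RTNL protection at all.
	 */
	unlock = rtnl_trylock();

	/* ... unmap vectors, clear queue<->NAPI links, free IRQs ... */

	if (unlock)
		rtnl_unlock();
}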
Ping. What should I do, lock RTNL at the callsite (roughly as in the
sketch below) or proceed with trylock()?
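
If going the callsite route, a minimal sketch of what I have in mind,
assuming the soft-reset path goes through something like
idpf_initiate_soft_reset() (the callsite name and its signature are
simplified here):

static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
{
	/* Every caller must hold RTNL. */
	ASSERT_RTNL();

	/* ... unmap vectors, clear queue<->NAPI links, free IRQs ... */
}

/* .ndo_close() path: the core already holds RTNL, nothing to add. */

/* Soft-reset path: take RTNL explicitly around the requeueing. */
static void idpf_initiate_soft_reset(struct idpf_vport *vport)
{
	rtnl_lock();

	idpf_vport_intr_rel_irq(vport);
	/* ... reallocate the queues, re-request IRQs, restart traffic ... */

	rtnl_unlock();
}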
Thanks,
Olek