Message-ID:
<LV2PR21MB33009692C01762F9BCF06D98CE75A@LV2PR21MB3300.namprd21.prod.outlook.com>
Date: Wed, 11 Jun 2025 17:36:51 +0000
From: Long Li <longli@...rosoft.com>
To: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>, Erni Sri Satya Vennela
<ernis@...ux.microsoft.com>
CC: KY Srinivasan <kys@...rosoft.com>, Haiyang Zhang <haiyangz@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
"andrew+netdev@...n.ch" <andrew+netdev@...n.ch>, "davem@...emloft.net"
<davem@...emloft.net>, "edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>, "pabeni@...hat.com" <pabeni@...hat.com>,
Konstantin Taranov <kotaranov@...rosoft.com>, "horms@...nel.org"
<horms@...nel.org>, Shiraz Saleem <shirazsaleem@...rosoft.com>,
"leon@...nel.org" <leon@...nel.org>, "shradhagupta@...ux.microsoft.com"
<shradhagupta@...ux.microsoft.com>, "schakrabarti@...ux.microsoft.com"
<schakrabarti@...ux.microsoft.com>, "rosenp@...il.com" <rosenp@...il.com>,
"sdf@...ichev.me" <sdf@...ichev.me>, "linux-hyperv@...r.kernel.org"
<linux-hyperv@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>
Subject: RE: [PATCH net-next 1/4] net: mana: Fix potential deadlocks in mana
napi ops
> Subject: Re: [PATCH net-next 1/4] net: mana: Fix potential deadlocks in mana
> napi ops
>
> On Wed, Jun 11, 2025 at 04:03:52AM -0700, Saurabh Singh Sengar wrote:
> > On Wed, Jun 11, 2025 at 01:46:13AM -0700, Erni Sri Satya Vennela wrote:
> > > When net_shaper_ops are enabled for MANA, netdev_ops_lock becomes
> > > active.
> > >
> > > The netvsc driver sets up the MANA VF via the following call chain:
> > >
> > > netvsc_vf_setup()
> > >   dev_change_flags()
> > >     ...
> > >       __dev_open() OR __dev_close()
> > >
> > > dev_change_flags() holds the netdev mutex via netdev_lock_ops.
> > >
> > > During this process, mana_create_txq() and mana_create_rxq() invoke
> > > netif_napi_add_tx(), netif_napi_add_weight(), and napi_enable(), all
> > > of which attempt to acquire the same lock, leading to a potential
> > > deadlock.
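For context, the unlocked NAPI helpers take the instance lock themselves.
The pattern in include/linux/netdevice.h is roughly this (paraphrased, not
the exact source):

        static inline void
        netif_napi_add(struct net_device *dev, struct napi_struct *napi,
                       int (*poll)(struct napi_struct *, int))
        {
                netdev_lock(dev);       /* dev->lock, the instance lock */
                netif_napi_add_locked(dev, napi, poll);
                netdev_unlock(dev);
        }

Since __dev_open()/__dev_close() already hold dev->lock via
netdev_lock_ops() whenever netdev_need_ops_lock() is true, re-taking it
inside these helpers is the deadlock described above.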
> >
> > The commit message could be structured better.
> >
> > >
> > > Similarly, mana_destroy_txq() and mana_destroy_rxq() call
> > > netif_napi_disable() and netif_napi_del(), which also contend for
> > > the same lock.
> > >
> > > Switch to the _locked variants of these APIs to avoid deadlocks when
> > > the netdev_ops_lock is held.
> > >
> > > Fixes: d4c22ec680c8 ("net: hold netdev instance lock during ndo_open/ndo_stop")
> > > Signed-off-by: Erni Sri Satya Vennela <ernis@...ux.microsoft.com>
> > > Reviewed-by: Haiyang Zhang <haiyangz@...rosoft.com>
> > > Reviewed-by: Shradha Gupta <shradhagupta@...ux.microsoft.com>
> > > ---
> > >  drivers/net/ethernet/microsoft/mana/mana_en.c | 39 ++++++++++++++-----
> > >  1 file changed, 30 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > index ccd2885c939e..3c879d8a39e3 100644
> > > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > @@ -1911,8 +1911,13 @@ static void mana_destroy_txq(struct mana_port_context *apc)
> > >                  napi = &apc->tx_qp[i].tx_cq.napi;
> > >                  if (apc->tx_qp[i].txq.napi_initialized) {
> > >                          napi_synchronize(napi);
> > > -                        napi_disable(napi);
> > > -                        netif_napi_del(napi);
> > > +                        if (netdev_need_ops_lock(napi->dev)) {
> > > +                                napi_disable_locked(napi);
> > > +                                netif_napi_del_locked(napi);
> > > +                        } else {
> > > +                                napi_disable(napi);
> > > +                                netif_napi_del(napi);
> > > +                        }
> >
> > Instead of using if-else, we can use netdev_lock_ops(), followed by the
> > *_locked APIs.
> > Same for the rest of the patch.
> >
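Concretely, that suggestion would turn the hunk above into something like
this (a hypothetical sketch; the follow-up below explains why it turns out
not to be quite right):

        netdev_lock_ops(napi->dev);     /* locks only when the ops lock is needed */
        napi_disable_locked(napi);
        netif_napi_del_locked(napi);
        netdev_unlock_ops(napi->dev);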
>
> I later realized that what we actually need is:
>
> if (!netdev_need_ops_lock(napi->dev))
>         netdev_lock(napi->dev);
>
> not
>
> if (netdev_need_ops_lock(napi->dev))
>         netdev_lock(napi->dev);
>
> Hence, netdev_lock_ops() is not appropriate. Instead, netdev_lock_ops_to_full()
> seems to be a better choice.
Yes, netdev_lock_ops_to_full() seems better.
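For reference, the destroy path would then look roughly like this (a
sketch, assuming netdev_unlock_full_to_ops() from include/net/netdev_lock.h
as the unlock counterpart):

        /* Takes dev->lock only when the core has not already taken it
         * as the ops lock; otherwise it just asserts that we hold it.
         */
        netdev_lock_ops_to_full(napi->dev);
        napi_disable_locked(napi);
        netif_napi_del_locked(napi);
        netdev_unlock_full_to_ops(napi->dev);

That keeps a single code path in the driver instead of open-coding the
netdev_need_ops_lock() check.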
Long