Message-ID: <87sf26l3go.fsf@nvidia.com>
Date: Mon, 05 Feb 2024 18:38:41 -0800
From: Rahul Rameshbabu <rrameshbabu@...dia.com>
To: Joe Damato <jdamato@...tly.com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org, tariqt@...dia.com,
Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>, "open list:MELLANOX MLX5 core VPI driver"
<linux-rdma@...r.kernel.org>
Subject: Re: [PATCH net-next] eth: mlx5: link NAPI instances to queues and IRQs
On Mon, 05 Feb, 2024 17:56:58 -0800 Joe Damato <jdamato@...tly.com> wrote:
> On Mon, Feb 05, 2024 at 05:44:27PM -0800, Rahul Rameshbabu wrote:
>>
>> On Mon, 05 Feb, 2024 17:41:52 -0800 Joe Damato <jdamato@...tly.com> wrote:
>> > On Mon, Feb 05, 2024 at 05:33:39PM -0800, Rahul Rameshbabu wrote:
>> >>
>> >> On Mon, 05 Feb, 2024 17:32:47 -0800 Joe Damato <jdamato@...tly.com> wrote:
>> >> > On Mon, Feb 05, 2024 at 05:09:09PM -0800, Rahul Rameshbabu wrote:
>> >> >> On Tue, 06 Feb, 2024 01:03:11 +0000 Joe Damato <jdamato@...tly.com> wrote:
>> >> >> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> >> >> > index c8e8f512803e..e1bfff1fb328 100644
>> >> >> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> >> >> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>> >> >> > @@ -2473,6 +2473,9 @@ static void mlx5e_close_queues(struct mlx5e_channel *c)
>> >> >> > mlx5e_close_tx_cqs(c);
>> >> >> > mlx5e_close_cq(&c->icosq.cq);
>> >> >> > mlx5e_close_cq(&c->async_icosq.cq);
>> >> >> > +
>> >> >> > + netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_TX, NULL);
>> >> >> > + netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX, NULL);
>> >> >>
>> >> >> These should be set to NULL *before* actually closing the rqs, sqs,
>> >> >> and related cqs, right? I would expect these two lines to be the
>> >> >> first ones called in mlx5e_close_queues. Btw, I think this should be
>> >> >> done in mlx5e_deactivate_channel, where the NAPI is disabled.
>> >> >>
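>> >> >> Roughly what I have in mind (an untested sketch; the exact teardown
>> >> >> steps in mlx5e_deactivate_channel may differ from what I recall of
>> >> >> the driver):
>> >> >>
>> >> >>         static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
>> >> >>         {
>> >> >>                 /* Clear the queue -> NAPI mappings before the NAPI
>> >> >>                  * is disabled and the queues are torn down, so
>> >> >>                  * nothing can report a stale association.
>> >> >>                  */
>> >> >>                 netif_queue_set_napi(c->netdev, c->ix,
>> >> >>                                      NETDEV_QUEUE_TYPE_TX, NULL);
>> >> >>                 netif_queue_set_napi(c->netdev, c->ix,
>> >> >>                                      NETDEV_QUEUE_TYPE_RX, NULL);
>> >> >>
>> >> >>                 /* ... existing deactivation: rq, sqs, napi_disable ... */
>> >> >>         }
>> >> >>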
>> >> >> > }
>> >> >> >
>> >> >> > static u8 mlx5e_enumerate_lag_port(struct mlx5_core_dev *mdev, int ix)
>> >> >> > @@ -2558,6 +2561,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
>> >> >> > c->stats = &priv->channel_stats[ix]->ch;
>> >> >> > c->aff_mask = irq_get_effective_affinity_mask(irq);
>> >> >> > c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
>> >> >> > + c->irq = irq;
>> >> >> >
>> >> >> > netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
>> >> >> >
>> >> >> > @@ -2602,6 +2606,10 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
>> >> >> > mlx5e_activate_xsk(c);
>> >> >> > else
>> >> >> > mlx5e_activate_rq(&c->rq);
>> >> >> > +
>> >> >> > + netif_napi_set_irq(&c->napi, c->irq);
>> >>
>> >> One small comment that I missed in my previous iteration. I think the
>> >> above should be moved to mlx5e_open_channel right after netif_napi_add.
>> >> This avoids needing to save the irq in struct mlx5e_channel.
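>> >>
>> >> Something like this (sketch only; in mlx5e_open_channel 'irq' is
>> >> already available as a local, as your c->irq = irq hunk shows):
>> >>
>> >>         netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
>> >>         /* 'irq' is still in scope here, so the NAPI <-> IRQ
>> >>          * association can be made without stashing it in
>> >>          * struct mlx5e_channel.
>> >>          */
>> >>         netif_napi_set_irq(&c->napi, irq);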
>> >
>> > I couldn't move it to mlx5e_open_channel because of how safe_switch_params
>> > and the mechanics around that seem to work (at least as far as I could
>> > tell).
>> >
>> > mlx5 seems to create a new set of channels before closing the previous
>> > set. So, moving this logic to open_channels and close_channels means
>> > you end up with a flow like this:
>> >
>> > - Create new channels (NAPI netlink API is used to set NAPIs)
>> > - Old channels are closed (NAPI netlink API sets NULL and overwrites the
>> > previous NAPI netlink calls)
>> >
>> > Now, the associations are all NULL.
>> >
>> > I think moving the calls to activate / deactivate fixes that problem,
>> > but requires that the irq is stored, if I am understanding the driver
>> > correctly.
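>> >
>> > To illustrate, roughly the ordering as I read it during a channel
>> > switch (function names from my reading of mlx5e_safe_switch_params, so
>> > I may have details wrong):
>> >
>> >         /* set/clear in open/close -- broken across a switch: */
>> >         mlx5e_open_channels(priv, &new_chs);  /* sets queue -> new NAPI */
>> >         mlx5e_deactivate_priv_channels(priv); /* old channels quiesced  */
>> >         /* ... priv->channels is swapped to new_chs ... */
>> >         mlx5e_activate_priv_channels(priv);   /* new channels running   */
>> >         mlx5e_close_channels(&old_chs);       /* sets queue -> NULL,    */
>> >                                               /* clobbering the new     */
>> >                                               /* mappings               */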
>>
>> I believe moving the changes to activate / deactivate channels resolves
>> this problem because only one set of channels will be active, so you
>> will no longer have dangling association conflicts for the queue ->
>> NAPI mappings. This is partially why I suggested the change in that
>> iteration.
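>>
>> On the activate side, I am picturing something like this (sketch only;
>> the existing activation steps are elided):
>>
>>         static void mlx5e_activate_channel(struct mlx5e_channel *c)
>>         {
>>                 napi_enable(&c->napi);
>>                 /* ... existing activation: txqsqs, icosq, rq/xsk ... */
>>
>>                 /* Only the currently-active channel set owns the
>>                  * queues, so an old channel set can no longer clobber
>>                  * these mappings.
>>                  */
>>                 netif_queue_set_napi(c->netdev, c->ix,
>>                                      NETDEV_QUEUE_TYPE_TX, &c->napi);
>>                 netif_queue_set_napi(c->netdev, c->ix,
>>                                      NETDEV_QUEUE_TYPE_RX, &c->napi);
>>         }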
>
> As far as I can tell, it does.
>
>> As for netif_napi_set_irq, that alone can be in mlx5e_open_channel (that
>> was the intent of my most recent comment, not that all the other
>> associations should be moved as well). I agree that the other
>> association calls should be part of activate / deactivate channels.
>
> OK, sure, that makes sense. I'll make that change, too.
>
Also, for your v2, it would be great if you could use the commit message
subject 'net/mlx5e: link NAPI instances to queues and IRQs' rather than
'eth: mlx5: link NAPI instances to queues and IRQs'.
--
Thanks,
Rahul Rameshbabu