Message-ID: <20240206171159.GA11565@fastly.com>
Date: Tue, 6 Feb 2024 09:12:00 -0800
From: Joe Damato <jdamato@...tly.com>
To: Tariq Toukan <ttoukan.linux@...il.com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	Gal Pressman <gal@...dia.com>, tariqt@...dia.com,
	rrameshbabu@...dia.com, Saeed Mahameed <saeedm@...dia.com>,
	Leon Romanovsky <leon@...nel.org>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	"open list:MELLANOX MLX5 core VPI driver" <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH net-next] eth: mlx5: link NAPI instances to queues and
 IRQs

On Tue, Feb 06, 2024 at 10:11:28AM +0200, Tariq Toukan wrote:
> 
> 
> On 06/02/2024 3:03, Joe Damato wrote:
> >Make mlx5 compatible with the newly added netlink queue GET APIs.
> >
> >Signed-off-by: Joe Damato <jdamato@...tly.com>
> 
> + Gal
> 
> Hi Joe,
> Thanks for your patch.
> 
> We have already prepared a similar patch; it's part of our internal
> submission queue and planned to be submitted soon.
> 
> Please see my comments below, and let us know whether you're willing to
> respin a v2 or wait for our patch.

Do you have a rough estimate on when it'll be submitted?

If it's several months out I'll try again, but if it's expected to be
submitted in the next few weeks I'll wait for your official change.

BTW, are the per-queue coalesce changes in that same queue? It was
mentioned previously [1] that this feature is coming after we submit a
simple attempt as an RFC. If that feature isn't planned or won't be
submitted anytime soon, could you let us know so we can try an RFC v3
for it?

[1]: https://lore.kernel.org/lkml/874jj34e67.fsf@nvidia.com/

> >---
> >  drivers/net/ethernet/mellanox/mlx5/core/en.h      | 1 +
> >  drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 8 ++++++++
> >  2 files changed, 9 insertions(+)
> >
> >diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >index 55c6ace0acd5..3f86ee1831a8 100644
> >--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >@@ -768,6 +768,7 @@ struct mlx5e_channel {
> >  	u16                        qos_sqs_size;
> >  	u8                         num_tc;
> >  	u8                         lag_port;
> >+	unsigned int		   irq;
> >  	/* XDP_REDIRECT */
> >  	struct mlx5e_xdpsq         xdpsq;
> >diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >index c8e8f512803e..e1bfff1fb328 100644
> >--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >@@ -2473,6 +2473,9 @@ static void mlx5e_close_queues(struct mlx5e_channel *c)
> >  	mlx5e_close_tx_cqs(c);
> >  	mlx5e_close_cq(&c->icosq.cq);
> >  	mlx5e_close_cq(&c->async_icosq.cq);
> >+
> >+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_TX, NULL);
> >+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX, NULL);
> >  }
> >  static u8 mlx5e_enumerate_lag_port(struct mlx5_core_dev *mdev, int ix)
> >@@ -2558,6 +2561,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
> >  	c->stats    = &priv->channel_stats[ix]->ch;
> >  	c->aff_mask = irq_get_effective_affinity_mask(irq);
> >  	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
> >+	c->irq		= irq;
> >  	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
> >@@ -2602,6 +2606,10 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
> >  		mlx5e_activate_xsk(c);
> >  	else
> >  		mlx5e_activate_rq(&c->rq);
> >+
> >+	netif_napi_set_irq(&c->napi, c->irq);
> 
> Can be safely moved to mlx5e_open_channel without interfering with other
> existing mapping. This would save the new irq field in mlx5e_channel.

Sure, yeah, I already have that change queued from last night.
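
For reference, the rough shape of what I have queued, assuming irq is
still in scope right after netif_napi_add() in mlx5e_open_channel() (so
the new irq field in struct mlx5e_channel goes away):

	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
	/* irq is already known here, so record it on the NAPI directly
	 * instead of stashing it in the channel for later.
	 */
	netif_napi_set_irq(&c->napi, irq);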

> >+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_TX, &c->napi);
> 
> In some configurations we have multiple txqs per channel, so the txq
> indices are not 1-to-1 with their channel index.
> 
> This should be called for each txq, with the proper txq index.
>
> It should also be done for the feature-dedicated SQs (like QOS/HTB).

OK. I think the above makes sense and I'll look into it if I have some time
this week.
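
Just to make sure I'm reading it right, that would mean something along
these lines where each SQ is activated/deactivated, rather than per
channel (a rough sketch; I'm assuming sq->netdev, sq->txq_ix and
sq->cq.napi are the right handles, and that the HTB/QoS SQs would get
the same calls where they are activated):

	/* in mlx5e_activate_txqsq(): map the NAPI to the real txq index,
	 * which is not necessarily the channel index when a channel owns
	 * multiple TCs or feature-dedicated SQs.
	 */
	netif_queue_set_napi(sq->netdev, sq->txq_ix,
			     NETDEV_QUEUE_TYPE_TX, sq->cq.napi);

	/* and the inverse on teardown, in mlx5e_deactivate_txqsq() */
	netif_queue_set_napi(sq->netdev, sq->txq_ix,
			     NETDEV_QUEUE_TYPE_TX, NULL);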
 
> >+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX, &c->napi);
> 
> For consistency, I'd move this one as well, to match the TX handling.

Sure.
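
To confirm, that would look roughly like this on the RX side (again a
sketch, assuming rq->netdev, rq->ix and rq->cq.napi are the handles to
use here):

	/* in mlx5e_activate_rq(), mirroring the TX handling */
	netif_queue_set_napi(rq->netdev, rq->ix,
			     NETDEV_QUEUE_TYPE_RX, rq->cq.napi);

	/* cleared again in mlx5e_deactivate_rq() */
	netif_queue_set_napi(rq->netdev, rq->ix,
			     NETDEV_QUEUE_TYPE_RX, NULL);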

> >  }
> >  static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
