Open Source and information security mailing list archives
 
Message-ID:
 <CY8PR12MB7195E405C3EC9F43619231CCDC4B2@CY8PR12MB7195.namprd12.prod.outlook.com>
Date: Tue, 29 Oct 2024 04:08:21 +0000
From: Parav Pandit <parav@...dia.com>
To: Caleb Sander Mateos <csander@...estorage.com>, Saeed Mahameed
	<saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Tariq Toukan
	<tariqt@...dia.com>, Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
	<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
	<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] mlx5: only schedule EQ comp tasklet if necessary

Hi,

> From: Caleb Sander Mateos <csander@...estorage.com>
> Sent: Sunday, October 27, 2024 9:37 AM
> 
> Currently, the mlx5_eq_comp_int() interrupt handler schedules a tasklet to call
> mlx5_cq_tasklet_cb() if it processes any completions. For CQs whose
> completions don't need to be processed in tasklet context, this overhead is
> unnecessary. Atomic operations are needed to schedule, lock, and clear the
> tasklet. And when mlx5_cq_tasklet_cb() runs, it acquires a spin lock to access
> the list of CQs enqueued for processing.
> 
> Schedule the tasklet in mlx5_add_cq_to_tasklet() instead to avoid this
> overhead. mlx5_add_cq_to_tasklet() is responsible for enqueuing the CQs to
> be processed in tasklet context, so it can schedule the tasklet. CQs that need
> tasklet processing have their interrupt comp handler set to
> mlx5_add_cq_to_tasklet(), so they will schedule the tasklet. CQs that don't
> need tasklet processing won't schedule the tasklet. To avoid scheduling the
> tasklet multiple times during the same interrupt, only schedule the tasklet in
> mlx5_add_cq_to_tasklet() if the tasklet work queue was empty before the
> new CQ was pushed to it.
> 
> Note that the mlx4 driver works the same way: it schedules the tasklet in
> mlx4_add_cq_to_tasklet() and only if the work queue was empty before.
> 
> Signed-off-by: Caleb Sander Mateos <csander@...estorage.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/cq.c | 5 +++++
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 5 +----
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
> index 4caa1b6f40ba..25f3b26db729 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
> @@ -69,22 +69,27 @@ void mlx5_cq_tasklet_cb(struct tasklet_struct *t)
>  static void mlx5_add_cq_to_tasklet(struct mlx5_core_cq *cq,
>  				   struct mlx5_eqe *eqe)
>  {
>  	unsigned long flags;
>  	struct mlx5_eq_tasklet *tasklet_ctx = cq->tasklet_ctx.priv;
> +	bool schedule_tasklet = false;
> 
>  	spin_lock_irqsave(&tasklet_ctx->lock, flags);
>  	/* When migrating CQs between EQs will be implemented, please note
>  	 * that you need to sync this point. It is possible that
>  	 * while migrating a CQ, completions on the old EQs could
>  	 * still arrive.
>  	 */
>  	if (list_empty_careful(&cq->tasklet_ctx.list)) {
>  		mlx5_cq_hold(cq);
> +		schedule_tasklet = list_empty(&tasklet_ctx->list);
>  		list_add_tail(&cq->tasklet_ctx.list, &tasklet_ctx->list);
>  	}
>  	spin_unlock_irqrestore(&tasklet_ctx->lock, flags);
> +
> +	if (schedule_tasklet)
> +		tasklet_schedule(&tasklet_ctx->task);
>  }
> 
>  /* Callers must verify outbox status in case of err */
>  int mlx5_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
>  		   u32 *in, int inlen, u32 *out, int outlen)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index 68cb86b37e56..66fc17d9c949 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -112,17 +112,17 @@ static int mlx5_eq_comp_int(struct notifier_block *nb,
>  	struct mlx5_eq_comp *eq_comp =
>  		container_of(nb, struct mlx5_eq_comp, irq_nb);
>  	struct mlx5_eq *eq = &eq_comp->core;
>  	struct mlx5_eqe *eqe;
>  	int num_eqes = 0;
> -	u32 cqn = -1;
> 
>  	eqe = next_eqe_sw(eq);
>  	if (!eqe)
>  		goto out;
> 
>  	do {
> +		u32 cqn;
>  		struct mlx5_core_cq *cq;
> 
A small nit: cqn should be declared after cq, to follow the netdev coding guidelines of [1].

>  		/* Make sure we read EQ entry contents after we've
>  		 * checked the ownership bit.
>  		 */
> @@ -145,13 +145,10 @@ static int mlx5_eq_comp_int(struct notifier_block *nb,
>  	} while ((++num_eqes < MLX5_EQ_POLLING_BUDGET) && (eqe = next_eqe_sw(eq)));
> 
>  out:
>  	eq_update_ci(eq, 1);
> 
> -	if (cqn != -1)
> -		tasklet_schedule(&eq_comp->tasklet_ctx.task);
> -
The current code processes many EQEs and performs the tasklet_schedule check only once, via the cqn check.
With this change, the additional check is done on every EQE.
This will make the interrupt handler marginally slower.
Returning a bool from comp() wouldn't be good either, and we cannot inline things here due to the function pointer.

However, the cost of scheduling a null tasklet is higher than this if (schedule_tasklet) check.
In another series internally, I am working on reducing the cost of comp() itself, unrelated to this change.
So it is OK to have the additional check introduced here.

Apart from that,

Reviewed-by: Parav Pandit <parav@...dia.com>

[1] https://elixir.bootlin.com/linux/v6.11.5/source/Documentation/process/maintainer-netdev.rst#L358

>  	return 0;
>  }
> 
>  /* Some architectures don't latch interrupts when they are disabled, so using
>   * mlx5_eq_poll_irq_disabled could end up losing interrupts while trying to
> --
> 2.45.2
> 

