Date:   Thu, 29 Dec 2022 08:33:45 +0200
From:   Leon Romanovsky <leon@...nel.org>
To:     Saeed Mahameed <saeed@...nel.org>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Saeed Mahameed <saeedm@...dia.com>, netdev@...r.kernel.org,
        Tariq Toukan <tariqt@...dia.com>,
        Shay Drory <shayd@...dia.com>, Moshe Shemesh <moshe@...dia.com>
Subject: Re: [net 04/12] net/mlx5: Avoid recovery in probe flows

On Wed, Dec 28, 2022 at 11:43:23AM -0800, Saeed Mahameed wrote:
> From: Shay Drory <shayd@...dia.com>
> 
> Currently, recovery is performed without considering whether the
> device is still in the probe flow. This may lead to recovery starting
> before the device has finished probing successfully, e.g. while
> mlx5_init_one() is still running. The recovery flow uses functionality
> that is loaded only by mlx5_init_one(), so there is no point in
> running recovery before mlx5_init_one() has finished successfully.
> 
> Fix it by waiting for the probe flow to finish, and by checking
> whether the device is probed, before trying to perform recovery.
> 
> Fixes: 51d138c2610a ("net/mlx5: Fix health error state handling")
> Signed-off-by: Shay Drory <shayd@...dia.com>
> Reviewed-by: Moshe Shemesh <moshe@...dia.com>
> Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/health.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> index 86ed87d704f7..96417c5feed7 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> @@ -674,6 +674,13 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
>  	dev = container_of(priv, struct mlx5_core_dev, priv);
>  	devlink = priv_to_devlink(dev);
>  
> +	mutex_lock(&dev->intf_state_mutex);
> +	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
> +		mlx5_core_err(dev, "health works are not permitted at this stage\n");
> +		mutex_unlock(&dev->intf_state_mutex);
> +		return;
> +	}

This bit is already checked when health recovery is queued in mlx5_trigger_health_work().

void mlx5_trigger_health_work(struct mlx5_core_dev *dev)
{
	struct mlx5_core_health *health = &dev->priv.health;
	unsigned long flags;

	spin_lock_irqsave(&health->wq_lock, flags);
	if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
		queue_work(health->wq, &health->fatal_report_work);
	else
		mlx5_core_err(dev, "new health works are not permitted at this stage\n");
	spin_unlock_irqrestore(&health->wq_lock, flags);
}

You probably need to elevate this check into the poll_health() routine
and change intf_state_mutex to be a spinlock, since poll_health() runs
in timer (atomic) context and can't take a mutex.
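
A rough sketch of that direction (untested; intf_state_lock here is a
hypothetical spinlock replacing intf_state_mutex, and the existing body
of poll_health() is elided):

static void poll_health(struct timer_list *t)
{
	struct mlx5_core_dev *dev = from_timer(dev, t, priv.health.timer);
	struct mlx5_core_health *health = &dev->priv.health;

	/* Bail out before any FW check can queue recovery work. */
	spin_lock(&dev->priv.intf_state_lock);
	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
		spin_unlock(&dev->priv.intf_state_lock);
		goto out;
	}
	spin_unlock(&dev->priv.intf_state_lock);

	/* ... existing FW health checks and mlx5_trigger_health_work() ... */
out:
	/* ... re-arm health->timer as today ... */
}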

Or another solution is to start health polling only after init has
completed.
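
That would amount to something like this in the PCI probe path (sketch
only; it assumes mlx5_start_health_poll() is currently called earlier
in the setup path, and the error label is illustrative):

	err = mlx5_init_one(dev);
	if (err) {
		mlx5_core_err(dev, "mlx5_init_one failed with error code %d\n", err);
		goto err_init_one;
	}

	/* Start FW health polling only once init has fully completed,
	 * so recovery work can never observe a half-probed device.
	 */
	mlx5_start_health_poll(dev);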

Thanks


> +	mutex_unlock(&dev->intf_state_mutex);
>  	enter_error_state(dev, false);
>  	if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {
>  		devl_lock(devlink);
> -- 
> 2.38.1
> 
