Message-ID: <Y63cpg47dpl7c6BM@x130>
Date: Thu, 29 Dec 2022 10:29:58 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Leon Romanovsky <leon@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Saeed Mahameed <saeedm@...dia.com>, netdev@...r.kernel.org,
Tariq Toukan <tariqt@...dia.com>,
Shay Drory <shayd@...dia.com>, Moshe Shemesh <moshe@...dia.com>
Subject: Re: [net 04/12] net/mlx5: Avoid recovery in probe flows
On 29 Dec 08:33, Leon Romanovsky wrote:
>On Wed, Dec 28, 2022 at 11:43:23AM -0800, Saeed Mahameed wrote:
>> From: Shay Drory <shayd@...dia.com>
>>
>> Currently, recovery is done without considering whether the device is
>> still in its probe flow.
>> This may lead to recovery starting before the device has finished
>> probing successfully, e.g. while mlx5_init_one() is still running. The
>> recovery flow uses functionality that is loaded only by mlx5_init_one(),
>> so there is no point in running recovery before mlx5_init_one() has
>> finished successfully.
>>
>> Fix it by waiting for the probe flow to finish and checking whether the
>> device is probed before trying to perform recovery.
>>
>> Fixes: 51d138c2610a ("net/mlx5: Fix health error state handling")
>> Signed-off-by: Shay Drory <shayd@...dia.com>
>> Reviewed-by: Moshe Shemesh <moshe@...dia.com>
>> Signed-off-by: Saeed Mahameed <saeedm@...dia.com>
>> ---
>> drivers/net/ethernet/mellanox/mlx5/core/health.c | 6 ++++++
>> 1 file changed, 6 insertions(+)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
>> index 86ed87d704f7..96417c5feed7 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
>> @@ -674,6 +674,12 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
>> dev = container_of(priv, struct mlx5_core_dev, priv);
>> devlink = priv_to_devlink(dev);
>>
>> + mutex_lock(&dev->intf_state_mutex);
>> + if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
>> + mlx5_core_err(dev, "health works are not permitted at this stage\n");
>> + mutex_unlock(&dev->intf_state_mutex);
>> + return;
>> + }
>
>This bit is already checked when health recovery is queued in mlx5_trigger_health_work().
>
> 764 void mlx5_trigger_health_work(struct mlx5_core_dev *dev)
> 765 {
> 766 struct mlx5_core_health *health = &dev->priv.health;
> 767 unsigned long flags;
> 768
> 769 spin_lock_irqsave(&health->wq_lock, flags);
> 770 if (!test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags))
> 771 queue_work(health->wq, &health->fatal_report_work);
> 772 else
> 773 mlx5_core_err(dev, "new health works are not permitted at this stage\n");
> 774 spin_unlock_irqrestore(&health->wq_lock, flags);
> 775 }
>
>You probably need to elevate this check to the poll_health() routine and
>change intf_state_mutex to be a spinlock.
Not possible, that would be a big design change to the driver..
>
>Or another solution is to start health polling only when init complete.
>
Also very complex and very risky to do in an rc.
Health polling should keep running across dynamic driver reloads,
for example devlink reload, but not on first probe..
If we start polling only after probe, then we will have to stop (sync) any
health work before .remove, which is a locking nightmare.. we've been there
before.