Message-ID: <1486403641.7793.43.camel@edumazet-glaptop3.roam.corp.google.com>
Date:   Mon, 06 Feb 2017 09:54:01 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Benjamin Poirier <bpoirier@...e.com>
Cc:     netdev@...r.kernel.org, Tariq Toukan <tariqt@...lanox.com>,
        Yishai Hadas <yishaih@...lanox.com>, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net] mlx4: Invoke softirqs after napi_reschedule

On Mon, 2017-02-06 at 09:10 -0800, Benjamin Poirier wrote:
> mlx4 may schedule napi from a workqueue. Afterwards, softirqs are not run
> in a deterministic time frame and the following message may be logged:
> NOHZ: local_softirq_pending 08
> 
> The problem is the same as what was described in commit ec13ee80145c
> ("virtio_net: invoke softirqs after __napi_schedule") and this patch
> applies the same fix to mlx4.
> 
> Cc: Eric Dumazet <eric.dumazet@...il.com>
> Signed-off-by: Benjamin Poirier <bpoirier@...e.com>
> ---
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index eac527e25ec9..14ce1549b638 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -513,10 +513,12 @@ void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
>  	if (!priv->port_up)
>  		return;
>  
> +	local_bh_disable();
>  	for (ring = 0; ring < priv->rx_ring_num; ring++) {
>  		if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
>  			napi_reschedule(&priv->rx_cq[ring]->napi);
>  	}
> +	local_bh_enable();
>  }
>  

I would prefer having the local_bh_disable()/enable() pair inside the
loop, as done in commit 8cf699ec849f4ca1413cea01289bd7d37dbcc626.

This gives a better chance of servicing one queue at a time, instead of
capturing all the queues on one cpu.
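
Something like this (untested sketch, with the disable/enable pair
moved into the loop body):

	for (ring = 0; ring < priv->rx_ring_num; ring++) {
		if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) {
			/* Raise NET_RX_SOFTIRQ for this ring only... */
			local_bh_disable();
			napi_reschedule(&priv->rx_cq[ring]->napi);
			/* ...and let it run before touching the next ring. */
			local_bh_enable();
		}
	}

Each local_bh_enable() can then run the pending softirq for that ring
right away, before the loop moves on to the next one.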

Thanks.

