Message-ID: <lsq.1487198494.642023747@decadent.org.uk>
Date: Wed, 15 Feb 2017 22:41:34 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, "David S. Miller" <davem@...emloft.net>,
"Eugenia Emantayev" <eugenia@...lanox.com>,
"Erez Shitrit" <erezsh@...lanox.com>,
"Tariq Toukan" <tariqt@...lanox.com>
Subject: [PATCH 3.2 066/126] net/mlx4_en: Process all completions in RX
rings after port goes up
3.2.85-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Erez Shitrit <erezsh@...lanox.com>
commit 8d59de8f7bb3db296331c665779c653b0c8d13ba upstream.
Currently there is a race between incoming traffic and the
initialization flow. The HW is able to receive packets
after INIT_PORT is done and unicast steering is configured.
Before we set priv->port_up, NAPI is not scheduled, so the
receive queues fill up and we never get new completion
interrupts.
This issue can occur when heavy traffic is running while
the port is being brought up.
The fix is to schedule NAPI once port_up is set. If the
receive queues were full, this processes all outstanding
CQEs and releases them.
Fixes: c27a02cd94d6 ("mlx4_en: Add driver for Mellanox ConnectX 10GbE NIC")
Signed-off-by: Erez Shitrit <erezsh@...lanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@...lanox.com>
Signed-off-by: Tariq Toukan <tariqt@...lanox.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
[bwh: Backported to 3.2: mlx4_en_priv::rx_cq is an array of structs not pointers]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
drivers/net/ethernet/mellanox/mlx4/en_netdev.c | 7 +++++++
1 file changed, 7 insertions(+)
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -684,6 +684,13 @@ int mlx4_en_start_port(struct net_device
 	queue_work(mdev->workqueue, &priv->mcast_task);
 
 	priv->port_up = true;
+
+	/* Process any outstanding completions to prevent
+	 * the queues from freezing if they are full
+	 */
+	for (i = 0; i < priv->rx_ring_num; i++)
+		napi_schedule(&priv->rx_cq[i].napi);
+
 	netif_tx_start_all_queues(dev);
 
 	return 0;
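
For readers unfamiliar with the NAPI contract that makes this a
permanent freeze rather than a transient stall, here is a minimal
sketch of a typical RX poll handler (hypothetical example_* names and
helpers, not the mlx4 code): the completion interrupt is only re-armed
after NAPI has drained the ring, so a ring that fills up before NAPI
is ever scheduled will never raise another interrupt, which is why the
one-off napi_schedule() in the hunk above is needed.

	/* Simplified, hypothetical NAPI poll handler illustrating the
	 * deadlock: the IRQ is re-armed only after the ring is drained.
	 */
	static int example_rx_poll(struct napi_struct *napi, int budget)
	{
		struct example_rx_cq *cq = container_of(napi,
						struct example_rx_cq, napi);
		/* Drain up to @budget completions from the ring
		 * (example_process_rx_cq() is an invented helper).
		 */
		int done = example_process_rx_cq(cq, budget);

		if (done < budget) {
			/* Ring drained: leave polling mode and re-arm the
			 * completion interrupt. If NAPI is never scheduled
			 * in the first place, we never get here, no further
			 * interrupts arrive, and the full ring stays frozen.
			 */
			napi_complete(napi);
			example_arm_cq_irq(cq);
		}
		return done;
	}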