Message-Id: <20220830153250.15496-2-mika.westerberg@linux.intel.com>
Date: Tue, 30 Aug 2022 18:32:46 +0300
From: Mika Westerberg <mika.westerberg@...ux.intel.com>
To: linux-usb@...r.kernel.org
Cc: Michael Jamet <michael.jamet@...el.com>,
Mika Westerberg <mika.westerberg@...ux.intel.com>,
Yehezkel Bernat <YehezkelShB@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Andreas Noever <andreas.noever@...il.com>,
Lukas Wunner <lukas@...ner.de>, netdev@...r.kernel.org
Subject: [PATCH 1/5] net: thunderbolt: Enable DMA paths only after rings are enabled
If the other host starts sending packets early on, it is possible that we
are still in the middle of populating the Rx ring with its initial
packets. This causes tbnet_poll() to mess up the queue and leads to list
corruption. This happens specifically when connected with macOS, as it
seems to start sending various IP discovery packets as soon as its side
of the paths is configured.

To prevent this, move the DMA path enabling to happen only after we have
primed the Rx ring. This makes sure no incoming packets can arrive
before we are ready to handle them.
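Roughly, the resulting order in tbnet_connected_work() is the following
(a simplified sketch; buffer allocation and the full error unwinding are
omitted, see the diff below for the exact code):

  /* Start both rings and prime the Rx ring with buffers first ... */
  tb_ring_start(net->tx_ring.ring);
  tb_ring_start(net->rx_ring.ring);

  /* ... Tx/Rx buffer allocation happens here ... */

  /* Only now open the DMA paths, so nothing can arrive before the
   * Rx ring is ready to receive it.
   */
  ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
                                net->rx_ring.ring->hop,
                                net->remote_transmit_path,
                                net->tx_ring.ring->hop);
  if (ret)
          goto err_free_tx_buffers;  /* free buffers, stop rings */

  netif_carrier_on(net->dev);
  netif_start_queue(net->dev);
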
Fixes: e69b6c02b4c3 ("net: Add support for networking over Thunderbolt cable")
Cc: stable@...r.kernel.org
Signed-off-by: Mika Westerberg <mika.westerberg@...ux.intel.com>
---
drivers/net/thunderbolt.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
index ff5d0e98a088..ab3f04562980 100644
--- a/drivers/net/thunderbolt.c
+++ b/drivers/net/thunderbolt.c
@@ -612,18 +612,13 @@ static void tbnet_connected_work(struct work_struct *work)
 		return;
 	}
 
-	/* Both logins successful so enable the high-speed DMA paths and
-	 * start the network device queue.
+	/* Both logins successful so enable the rings, high-speed DMA
+	 * paths and start the network device queue.
+	 *
+	 * Note we enable the DMA paths last to make sure we have primed
+	 * the Rx ring before any incoming packets are allowed to
+	 * arrive.
 	 */
-	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
-				      net->rx_ring.ring->hop,
-				      net->remote_transmit_path,
-				      net->tx_ring.ring->hop);
-	if (ret) {
-		netdev_err(net->dev, "failed to enable DMA paths\n");
-		return;
-	}
-
 	tb_ring_start(net->tx_ring.ring);
 	tb_ring_start(net->rx_ring.ring);
 
@@ -635,10 +630,21 @@ static void tbnet_connected_work(struct work_struct *work)
 	if (ret)
 		goto err_free_rx_buffers;
 
+	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
+				      net->rx_ring.ring->hop,
+				      net->remote_transmit_path,
+				      net->tx_ring.ring->hop);
+	if (ret) {
+		netdev_err(net->dev, "failed to enable DMA paths\n");
+		goto err_free_tx_buffers;
+	}
+
 	netif_carrier_on(net->dev);
 	netif_start_queue(net->dev);
 	return;
 
+err_free_tx_buffers:
+	tbnet_free_buffers(&net->tx_ring);
 err_free_rx_buffers:
 	tbnet_free_buffers(&net->rx_ring);
 err_stop_rings:
--
2.35.1