Message-Id: <20221121095631.216209-2-hch@lst.de>
Date: Mon, 21 Nov 2022 10:56:30 +0100
From: Christoph Hellwig <hch@....de>
To: Greg Ungerer <gerg@...ux-m68k.org>,
Joakim Zhang <qiangqing.zhang@....com>
Cc: Geert Uytterhoeven <geert@...ux-m68k.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
linux-m68k@...ts.linux-m68k.org, uclinux-dev@...inux.org,
netdev@...r.kernel.org
Subject: [PATCH 1/2] net: fec: use dma_alloc_noncoherent for m532x
The m532x coldfire platforms can't properly implement dma_alloc_coherent
and currently just return noncoherent memory from it.  The fec driver
then works around this with a flush of all caches in the receive path.
Make this hack a little less bad by using the explicit
dma_alloc_noncoherent API and documenting the hacky cache flushes.
Signed-off-by: Christoph Hellwig <hch@....de>
---
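For context (not part of the patch): a minimal sketch of how a
dma_alloc_noncoherent() buffer is normally paired with explicit ownership
transfers via dma_sync_single_for_device()/dma_sync_single_for_cpu(),
which is what the flush_cache_all() hack stands in for on m532x.  The
helper name and parameters below are made up for illustration only and do
not correspond to fec code:

#include <linux/dma-mapping.h>

/*
 * Illustrative helper, not fec code: allocate a noncoherent buffer and
 * hand it to the device with an explicit ownership transfer.
 */
static void *example_alloc_for_device(struct device *dev, size_t size,
				      dma_addr_t *dma)
{
	void *buf;

	buf = dma_alloc_noncoherent(dev, size, dma, DMA_BIDIRECTIONAL,
				    GFP_KERNEL);
	if (!buf)
		return NULL;

	/* CPU writes to buf happen here, then ownership moves to the device. */
	dma_sync_single_for_device(dev, *dma, size, DMA_BIDIRECTIONAL);
	return buf;
}

Since m532x cannot provide proper coherent allocations, the driver keeps
the flush_cache_all() in the receive path; this patch only switches the
allocation/free calls for that platform and documents the hack.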
drivers/net/ethernet/freescale/fec_main.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 28ef4d3c18789..5230698310b5e 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1580,6 +1580,10 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 	struct page *page;
 
 #ifdef CONFIG_M532x
+	/*
+	 * Hacky flush of all caches instead of using the DMA API for the TSO
+	 * headers.
+	 */
 	flush_cache_all();
 #endif
 	rxq = fep->rx_queue[queue_id];
@@ -3123,10 +3127,17 @@ static void fec_enet_free_queue(struct net_device *ndev)
 	for (i = 0; i < fep->num_tx_queues; i++)
 		if (fep->tx_queue[i] && fep->tx_queue[i]->tso_hdrs) {
 			txq = fep->tx_queue[i];
+#ifdef CONFIG_M532x
+			dma_free_noncoherent(&fep->pdev->dev,
+					     txq->bd.ring_size * TSO_HEADER_SIZE,
+					     txq->tso_hdrs, txq->tso_hdrs_dma,
+					     DMA_BIDIRECTIONAL);
+#else
 			dma_free_coherent(&fep->pdev->dev,
 					  txq->bd.ring_size * TSO_HEADER_SIZE,
 					  txq->tso_hdrs,
 					  txq->tso_hdrs_dma);
+#endif
 		}
 
 	for (i = 0; i < fep->num_rx_queues; i++)
@@ -3157,10 +3168,18 @@ static int fec_enet_alloc_queue(struct net_device *ndev)
 		txq->tx_wake_threshold =
 			(txq->bd.ring_size - txq->tx_stop_threshold) / 2;
 
+#ifdef CONFIG_M532x
+		/* m68knommu manually flushes all caches in fec_enet_rx_queue */
+		txq->tso_hdrs = dma_alloc_noncoherent(&fep->pdev->dev,
+					txq->bd.ring_size * TSO_HEADER_SIZE,
+					&txq->tso_hdrs_dma, DMA_BIDIRECTIONAL,
+					GFP_KERNEL);
+#else
 		txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev,
 					txq->bd.ring_size * TSO_HEADER_SIZE,
 					&txq->tso_hdrs_dma,
 					GFP_KERNEL);
+#endif
 		if (!txq->tso_hdrs) {
 			ret = -ENOMEM;
 			goto alloc_failed;
--
2.30.2