Message-Id: <20241205091830.3719609-1-0x1207@gmail.com>
Date: Thu, 5 Dec 2024 17:18:30 +0800
From: Furong Xu <0x1207@...il.com>
To: netdev@...r.kernel.org,
linux-stm32@...md-mailman.stormreply.com,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Cc: andrew+netdev@...n.ch,
Alexandre Torgue <alexandre.torgue@...s.st.com>,
Jose Abreu <joabreu@...opsys.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Maxime Coquelin <mcoquelin.stm32@...il.com>,
xfr@...look.com,
Furong Xu <0x1207@...il.com>,
Jon Hunter <jonathanh@...dia.com>,
Thierry Reding <thierry.reding@...il.com>,
Russell King <linux@...linux.org.uk>
Subject: [PATCH net v1] net: stmmac: TSO: Fix unaligned DMA unmap for non-paged SKB data

Commit 66600fac7a98 ("net: stmmac: TSO: Fix unbalanced DMA map/unmap for
non-paged SKB data") stores a wrong DMA buffer address in
tx_q->tx_skbuff_dma[entry].buf: on platforms where the DMA AXI address
width is configured to 40-bit/48-bit, the saved address carries an extra
offset of proto_hdr_len. stmmac_tx_clean() later tries to unmap this
invalid DMA buffer address, and many crashes have been reported: [1] [2].

This patch guarantees that the DMA address passed to stmmac_tx_clean()
is the original mapping address, unmodified and without any offset.

[1] https://lore.kernel.org/all/d8112193-0386-4e14-b516-37c2d838171a@nvidia.com/
[2] https://lore.kernel.org/all/klkzp5yn5kq5efgtrow6wbvnc46bcqfxs65nz3qy77ujr5turc@bwwhelz2l4dw/
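
For context, a minimal sketch of the DMA-API rule the fix restores (not
part of this patch; struct tx_slot, map_tso_head() and unmap_tso_head()
are hypothetical names used only for illustration): dma_unmap_single()
must receive the exact handle returned by dma_map_single(), so the
address saved for the completion path must stay unmodified even when an
offset copy is used to program the payload descriptors.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

struct tx_slot {
	dma_addr_t buf;	/* handle to pass back to dma_unmap_single() */
	size_t len;
};

static int map_tso_head(struct device *dev, struct sk_buff *skb,
			unsigned int proto_hdr_len, struct tx_slot *slot,
			dma_addr_t *payload_des)
{
	dma_addr_t des;

	des = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
	if (dma_mapping_error(dev, des))
		return -ENOMEM;

	/* Save the unmodified handle for the completion/unmap path. */
	slot->buf = des;
	slot->len = skb_headlen(skb);

	/* An offset copy may be used to program the payload descriptors,
	 * but it must never be stored as the address to unmap.
	 */
	*payload_des = des + proto_hdr_len;
	return 0;
}

static void unmap_tso_head(struct device *dev, struct tx_slot *slot)
{
	/* Must match the address returned by dma_map_single() exactly. */
	dma_unmap_single(dev, slot->buf, slot->len, DMA_TO_DEVICE);
}
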
Reported-by: Jon Hunter <jonathanh@...dia.com>
Reported-by: Thierry Reding <thierry.reding@...il.com>
Suggested-by: Russell King (Oracle) <linux@...linux.org.uk>
Fixes: 66600fac7a98 ("net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data")
Signed-off-by: Furong Xu <0x1207@...il.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 9b262cdad60b..7227f8428b5e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -4192,8 +4192,8 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct stmmac_txq_stats *txq_stats;
 	struct stmmac_tx_queue *tx_q;
 	u32 pay_len, mss, queue;
+	dma_addr_t tso_hdr, des;
 	u8 proto_hdr_len, hdr;
-	dma_addr_t des;
 	bool set_ic;
 	int i;
 
@@ -4279,6 +4279,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 			     DMA_TO_DEVICE);
 	if (dma_mapping_error(priv->device, des))
 		goto dma_map_err;
+	tso_hdr = des;
 
 	if (priv->dma_cap.addr64 <= 32) {
 		first->des0 = cpu_to_le32(des);
@@ -4310,7 +4311,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * this DMA buffer right after the DMA engine completely finishes the
 	 * full buffer transmission.
 	 */
-	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
+	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = tso_hdr;
 	tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
 	tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
 	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
--
2.34.1