Message-ID: <ZxYZNc+CKWglUphG@test-OptiPlex-Tower-Plus-7010>
Date: Mon, 21 Oct 2024 14:34:53 +0530
From: Hariprasad Kelam <hkelam@...vell.com>
To: Furong Xu <0x1207@...il.com>
CC: <netdev@...r.kernel.org>, <linux-stm32@...md-mailman.stormreply.com>,
    <linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
    Alexandre Torgue <alexandre.torgue@...s.st.com>,
    Jose Abreu <joabreu@...opsys.com>,
    "David S. Miller" <davem@...emloft.net>,
    Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
    Paolo Abeni <pabeni@...hat.com>,
    Maxime Coquelin <mcoquelin.stm32@...il.com>, <xfr@...look.com>,
    Suraj Jaiswal <quic_jsuraj@...cinc.com>
Subject: Re: [PATCH net v1] net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data

On 2024-10-21 at 11:40:23, Furong Xu (0x1207@...il.com) wrote:
> In case the non-paged data of an SKB carries the protocol header and
> protocol payload to be transmitted on a platform whose DMA AXI address
> width is configured to 40-bit/48-bit, or the size of the non-paged data
> is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address
> width is configured to 32-bit, then this SKB requires at least two DMA
> transmit descriptors to serve it.
>
> For example, three descriptors are allocated to split one DMA buffer
> mapped from one piece of non-paged data:
> dma_desc[N + 0],
> dma_desc[N + 1],
> dma_desc[N + 2].
> Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold
> extra information to be reused in stmmac_tx_clean():
> tx_q->tx_skbuff_dma[N + 0],
> tx_q->tx_skbuff_dma[N + 1],
> tx_q->tx_skbuff_dma[N + 2].
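
For anyone tracing this path: stmmac_tso_allocator() is what splits the
one mapped buffer across several descriptors. A condensed sketch of its
loop (error handling and the 32-bit addressing details trimmed; field
names as in recent kernels):

	/* Split one DMA buffer of total_len bytes into chunks of at most
	 * TSO_MAX_BUFF_SIZE bytes, one descriptor per chunk. Note that
	 * tx_q->cur_tx is advanced for every descriptor, so when the
	 * loop ends it indexes the last descriptor of this buffer.
	 */
	while (tmp_len > 0) {
		tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx,
						priv->dma_conf.dma_tx_size);
		desc = &tx_q->dma_tx[tx_q->cur_tx];

		buff_size = min_t(int, tmp_len, TSO_MAX_BUFF_SIZE);
		stmmac_set_desc_addr(priv, desc, des + (total_len - tmp_len));
		/* length/ownership flags are filled in via
		 * stmmac_prepare_tso_tx_desc() in the real code */

		tmp_len -= TSO_MAX_BUFF_SIZE;
	}
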
> Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer
> address returned by the DMA mapping call. stmmac_tx_clean() will try to
> unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is a
> valid buffer address.
>
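Right. For reference, the guard in stmmac_tx_clean() looks roughly like
this (condensed from the driver, so take it as a sketch):

	/* Unmap only entries whose .buf was recorded by the xmit path */
	if (likely(tx_q->tx_skbuff_dma[entry].buf &&
		   tx_q->tx_skbuff_dma[entry].buf_type != STMMAC_TXBUF_T_XDP_TX)) {
		if (tx_q->tx_skbuff_dma[entry].map_as_page)
			dma_unmap_page(priv->device,
				       tx_q->tx_skbuff_dma[entry].buf,
				       tx_q->tx_skbuff_dma[entry].len,
				       DMA_TO_DEVICE);
		else
			dma_unmap_single(priv->device,
					 tx_q->tx_skbuff_dma[entry].buf,
					 tx_q->tx_skbuff_dma[entry].len,
					 DMA_TO_DEVICE);
		tx_q->tx_skbuff_dma[entry].buf = 0;
		tx_q->tx_skbuff_dma[entry].map_as_page = false;
	}
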
> The expected behavior, saving the DMA buffer address of this non-paged
> data to tx_q->tx_skbuff_dma[entry].buf, is:
> tx_q->tx_skbuff_dma[N + 0].buf = NULL;
> tx_q->tx_skbuff_dma[N + 1].buf = NULL;
> tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
> Unfortunately, the current code misbehaves like this:
> tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
> tx_q->tx_skbuff_dma[N + 1].buf = NULL;
> tx_q->tx_skbuff_dma[N + 2].buf = NULL;
>
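The root cause is the ordering in stmmac_tso_xmit(): first_entry is
latched before stmmac_tso_allocator() advances tx_q->cur_tx, so the old
code recorded the mapping against the first descriptor instead of the
last. Schematically:

	first_entry = tx_q->cur_tx;	/* index of dma_desc[N + 0] */

	des = dma_map_single(priv->device, skb->data,
			     skb_headlen(skb), DMA_TO_DEVICE);

	/* Old (buggy): mapping recorded at the first descriptor */
	tx_q->tx_skbuff_dma[first_entry].buf = des;

	/* stmmac_tso_allocator() advances tx_q->cur_tx past
	 * dma_desc[N + 1] and dma_desc[N + 2] */
	stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);

	/* Fixed (this patch): mapping recorded at the last descriptor */
	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
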
> On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the
> DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer
> address, so the DMA buffer will be unmapped immediately.
> In a rare case, the DMA engine may not have finished the pending
> dma_desc[N + 1] and dma_desc[N + 2] yet. Then things go horribly
> wrong: DMA accesses an unmapped/unreferenced memory region, and
> corrupted data gets transmitted or an IOMMU fault is triggered :(
>
> In contrast, the for-loop that maps SKB fragments behaves perfectly
> as expected, and that is how the driver should handle both non-paged
> data and paged frags.
>
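Agreed. The fragment loop already uses the map, then allocate, then
record-at-cur_tx order that this patch adopts for the head data.
Slightly condensed from the driver:

	for (i = 0; i < nfrags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		des = skb_frag_dma_map(priv->device, frag, 0,
				       skb_frag_size(frag), DMA_TO_DEVICE);
		if (dma_mapping_error(priv->device, des))
			goto dma_map_err;

		stmmac_tso_allocator(priv, des, skb_frag_size(frag),
				     (i == nfrags - 1), queue);

		/* .buf is recorded at tx_q->cur_tx only after the
		 * allocator has advanced it to the fragment's last
		 * descriptor */
		tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
		tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
		tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
		tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
	}
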
> This patch corrects the DMA map/unmap sequences by fixing the array
> index of tx_q->tx_skbuff_dma[entry].buf used when assigning the DMA
> buffer address.
>
> Tested and verified on DWXGMAC CORE 3.20a
>
> Reported-by: Suraj Jaiswal <quic_jsuraj@...cinc.com>
> Fixes: f748be531d70 ("stmmac: support new GMAC4")
> Signed-off-by: Furong Xu <0x1207@...il.com>
> ---
> .../net/ethernet/stmicro/stmmac/stmmac_main.c | 22 ++++++++++++++-----
> 1 file changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index d3895d7eecfc..208dbc68aaf9 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -4304,11 +4304,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
> if (dma_mapping_error(priv->device, des))
> goto dma_map_err;
>
> - tx_q->tx_skbuff_dma[first_entry].buf = des;
> - tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
> - tx_q->tx_skbuff_dma[first_entry].map_as_page = false;
> - tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB;
> -
> if (priv->dma_cap.addr64 <= 32) {
> first->des0 = cpu_to_le32(des);
>
> @@ -4327,6 +4322,23 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
>
> stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
>
> + /* In case two or more DMA transmit descriptors are allocated for this
> + * non-paged SKB data, the DMA buffer address should be saved to
> + * tx_q->tx_skbuff_dma[].buf corresponding to the last descriptor,
> + * and the other tx_q->tx_skbuff_dma[].buf entries should stay NULL to
> + * guarantee that stmmac_tx_clean() does not unmap the entire DMA
> + * buffer too early, since the tail areas of the DMA buffer may still
> + * be accessed by the DMA engine sooner or later.
> + * By saving the DMA buffer address to the tx_q->tx_skbuff_dma[].buf
> + * corresponding to the last descriptor, stmmac_tx_clean() will unmap
> + * this DMA buffer only after the DMA engine has completely finished
> + * transmitting the full buffer.
> + */
> + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
> + tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
> + tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
> + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
> +
> /* Prepare fragments */
> for (i = 0; i < nfrags; i++) {
> const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
> --
Reviewed-by: Hariprasad Kelam <hkelam@...vell.com>