Message-ID: <03f24864-9d4d-b4f9-354a-f3b271c0ae66@csgroup.eu>
Date: Fri, 20 May 2022 05:39:38 +0000
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Mans Rullgard <mans@...sr.com>,
Pantelis Antoniou <pantelis.antoniou@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Vitaly Bordug <vbordug@...mvista.com>,
Dan Malek <dan@...eddededge.com>,
Joakim Tjernlund <joakim.tjernlund@...entis.se>,
Christophe Leroy <christophe.leroy@...roup.eu>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] net: fs_enet: sync rx dma buffer before reading
On 19/05/2022 at 21:24, Mans Rullgard wrote:
> The dma_sync_single_for_cpu() call must precede reading the received
> data. Fix this.
See original commit 070e1f01827c. It explicitly says that the cache
must be invalidated _AFTER_ the copy.

The cache is initially invalidated by dma_map_single(), so before the
copy the cache is already clean.

After the copy, the data is in the cache. In order to allow re-use of
the skb, it must be put back in the same condition as before; that is,
the cache must be invalidated so that it is in the same state as right
after dma_map_single().
So I think your change is wrong.
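
To spell out the sequence I am describing, here is a simplified sketch
(the DMA API calls are the real ones, the surrounding context is
condensed from the driver, and it assumes a non-coherent platform where
the sync for DMA_FROM_DEVICE is implemented as a cache invalidate):

	/* Mapping cleans/invalidates the cache lines covering the
	 * buffer, so nothing stale is cached before the device writes
	 * into it. */
	dma = dma_map_single(fep->dev, skb->data, ENET_RX_FRSIZE,
			     DMA_FROM_DEVICE);

	/* ... hardware DMAs a received frame into the buffer ... */

	/* The CPU copy pulls the received bytes into the cache. */
	skb_copy_from_linear_data(skb, skbn->data, pkt_len);

	/* Invalidate afterwards: the cached lines are discarded and
	 * the buffer is back in the same state as right after
	 * dma_map_single(), so the device can safely reuse it. */
	dma_sync_single_for_cpu(fep->dev, CBDR_BUFADDR(bdp),
				L1_CACHE_ALIGN(pkt_len),
				DMA_FROM_DEVICE);

With your patch the invalidate happens before the copy instead, so the
lines the copy brings into the cache stay there, and the buffer is no
longer in the post-dma_map_single() state when the device reuses it.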
>
> Fixes: 070e1f01827c ("net: fs_enet: don't unmap DMA when packet len is below copybreak")
> Signed-off-by: Mans Rullgard <mans@...sr.com>
> ---
> drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
> index b3dae17e067e..432ce10cbfd0 100644
> --- a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
> +++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
> @@ -240,14 +240,14 @@ static int fs_enet_napi(struct napi_struct *napi, int budget)
> /* +2 to make IP header L1 cache aligned */
> skbn = netdev_alloc_skb(dev, pkt_len + 2);
> if (skbn != NULL) {
> + dma_sync_single_for_cpu(fep->dev,
> + CBDR_BUFADDR(bdp),
> + L1_CACHE_ALIGN(pkt_len),
> + DMA_FROM_DEVICE);
> skb_reserve(skbn, 2); /* align IP header */
> skb_copy_from_linear_data(skb,
> skbn->data, pkt_len);
> swap(skb, skbn);
> - dma_sync_single_for_cpu(fep->dev,
> - CBDR_BUFADDR(bdp),
> - L1_CACHE_ALIGN(pkt_len),
> - DMA_FROM_DEVICE);
> }
> } else {
> skbn = netdev_alloc_skb(dev, ENET_RX_FRSIZE);
> --
> 2.35.1
>