Message-Id: <20181205.123245.1075111432564395434.davem@davemloft.net>
Date: Wed, 05 Dec 2018 12:32:45 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: anssi.hannula@...wise.fi
Cc: nicolas.ferre@...rochip.com, netdev@...r.kernel.org,
harini.katakam@...inx.com, michal.simek@...inx.com
Subject: Re: [PATCH 1/3] net: macb: fix random memory corruption on RX with
64-bit DMA
From: Anssi Hannula <anssi.hannula@...wise.fi>
Date: Fri, 30 Nov 2018 20:21:35 +0200
> @@ -682,6 +682,11 @@ static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma_addr_
> if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
> desc_64 = macb_64b_desc(bp, desc);
> desc_64->addrh = upper_32_bits(addr);
> + /* The low bits of RX address contain the RX_USED bit, clearing
> + * of which allows packet RX. Make sure the high bits are also
> + * visible to HW at that point.
> + */
> + dma_wmb();
> }
I agree that dma_wmb() is what should be used here.
We are ordering CPU stores against DMA visibility, which is exactly what
the dma_*() barriers are for.
If that doesn't work properly in some architecture's implementation of
dma_*(), the implementation should be fixed rather than papering over it
in the drivers.