Message-ID: <1531823403-3683-5-git-send-email-hannah@marvell.com>
Date: Tue, 17 Jul 2018 13:30:02 +0300
From: <hannah@...vell.com>
To: <dan.j.williams@...el.com>, <vkoul@...nel.org>,
<dmaengine@...r.kernel.org>
CC: <thomas.petazzoni@...tlin.com>, <linux-kernel@...r.kernel.org>,
<nadavh@...vell.com>, <omrii@...vell.com>, <oferh@...vell.com>,
<gregory.clement@...tlin.com>, Hanna Hawa <hannah@...vell.com>
Subject: [PATCH 4/5] dmaengine: mv_xor_v2: move unmap to before callback
From: Hanna Hawa <hannah@...vell.com>
The completion callback should only be invoked after dma_descriptor_unmap()
has been called. This allows the cache invalidation to take place, ensuring
that the data seen by the upper layer is the data written by the DMA engine
rather than stale cache contents. On some architectures this is handled by
the hardware, but the code should stay consistent to avoid confusion.
Signed-off-by: Hanna Hawa <hannah@...vell.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@...tlin.com>
---
drivers/dma/mv_xor_v2.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
index 14e2a7a..d41d916 100644
--- a/drivers/dma/mv_xor_v2.c
+++ b/drivers/dma/mv_xor_v2.c
@@ -589,10 +589,9 @@ static void mv_xor_v2_tasklet(unsigned long data)
*/
dma_cookie_complete(&next_pending_sw_desc->async_tx);
+ dma_descriptor_unmap(&next_pending_sw_desc->async_tx);
dmaengine_desc_get_callback_invoke(
&next_pending_sw_desc->async_tx, NULL);
-
- dma_descriptor_unmap(&next_pending_sw_desc->async_tx);
}
dma_run_dependencies(&next_pending_sw_desc->async_tx);
--
1.9.1