Message-Id: <20181126213710.3084-1-vivek.gautam@codeaurora.org>
Date: Tue, 27 Nov 2018 03:07:10 +0530
From: Vivek Gautam <vivek.gautam@...eaurora.org>
To: airlied@...ux.ie, robdclark@...il.com
Cc: tfiga@...omium.org, linux-kernel@...r.kernel.org,
freedreno@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-arm-msm@...r.kernel.org,
Vivek Gautam <vivek.gautam@...eaurora.org>,
Jordan Crouse <jcrouse@...eaurora.org>,
Sean Paul <seanpaul@...omium.org>
Subject: [PATCH v2 1/1] drm: msm: Replace dma_map_sg with dma_sync_sg*
dma_map_sg() expects a DMA domain. However, the DRM devices have
traditionally used an unmanaged IOMMU domain, which is not a DMA
domain, so calling the DMA mapping APIs against it is incorrect.
Replace the dma_map_sg() calls with dma_sync_sg_for_device() and
dma_sync_sg_for_cpu(), which perform only the cache maintenance.
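To illustrate the distinction (illustration only, not part of the
patch; 'domain', 'sgt' and 'iova' below are hypothetical): with an
unmanaged domain the driver programs the IOVAs itself through
iommu_map_sg(), so the streaming DMA API is wanted only for cache
maintenance, never for IOVA allocation:

	#include <linux/dma-mapping.h>
	#include <linux/iommu.h>

	static int example_map_and_clean(struct device *dev,
					 struct iommu_domain *domain,
					 struct sg_table *sgt,
					 unsigned long iova)
	{
		/* The driver allocates and programs the IOVA itself. */
		if (!iommu_map_sg(domain, iova, sgt->sgl, sgt->nents,
				  IOMMU_READ | IOMMU_WRITE))
			return -ENOMEM;

		/*
		 * Cache maintenance only; dma_map_sg() would also try to
		 * allocate an IOVA from a DMA domain this device does not
		 * have. sg_dma_address() is assumed to have been pointed
		 * at the physical pages beforehand, as the patch does.
		 */
		dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents,
				       DMA_TO_DEVICE);
		return 0;
	}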
Signed-off-by: Vivek Gautam <vivek.gautam@...eaurora.org>
Suggested-by: Tomasz Figa <tfiga@...omium.org>
Cc: Rob Clark <robdclark@...il.com>
Cc: Jordan Crouse <jcrouse@...eaurora.org>
Cc: Sean Paul <seanpaul@...omium.org>
---
Changes since v1:
- Addressed Jordan's and Tomasz's comments:
  - Moved the sg dma address preparation out of the conditional
    check and into the main path, so that it is done irrespective of
    whether the buffer is cached or uncached.
  - Enhanced the comment to explain this dma address preparation.
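For reference, the resulting flow in get_pages() looks like this
(a sketch of the flow, not a verbatim excerpt; the dma address
preparation is unconditional, the cache clean is not):

	struct scatterlist *s;
	int i;

	/*
	 * Always point sg->dma_address at the physical pages: the
	 * dma_sync_sg_*() helpers operate on sg_dma_address(), and
	 * nothing else fills it in on this path.
	 */
	for_each_sg(msm_obj->sgt->sgl, s, msm_obj->sgt->nents, i)
		sg_dma_address(s) = sg_phys(s);

	/* Clean the caches only for the non-cached buffer types. */
	if (msm_obj->flags & (MSM_BO_WC | MSM_BO_UNCACHED))
		dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
				       msm_obj->sgt->nents, DMA_TO_DEVICE);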
drivers/gpu/drm/msm/msm_gem.c | 31 ++++++++++++++++++++++---------
1 file changed, 22 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 00c795ced02c..1811ac23a31c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -81,6 +81,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		struct drm_device *dev = obj->dev;
 		struct page **p;
 		int npages = obj->size >> PAGE_SHIFT;
+		struct scatterlist *s;
+		int i;
 
 		if (use_pages(obj))
 			p = drm_gem_get_pages(obj);
@@ -104,12 +106,21 @@ static struct page **get_pages(struct drm_gem_object *obj)
 			return ptr;
 		}
 
-		/* For non-cached buffers, ensure the new pages are clean
+		/*
+		 * dma_sync_sg_*() flush the physical pages, so point
+		 * sg->dma_address to the physical ones for the right behavior.
+		 */
+		for_each_sg(msm_obj->sgt->sgl, s, msm_obj->sgt->nents, i)
+			sg_dma_address(s) = sg_phys(s);
+
+		/*
+		 * For non-cached buffers, ensure the new pages are clean
 		 * because display controller, GPU, etc. are not coherent:
 		 */
-		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			dma_map_sg(dev->dev, msm_obj->sgt->sgl,
-					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+		if (msm_obj->flags & (MSM_BO_WC | MSM_BO_UNCACHED))
+			dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
+					       msm_obj->sgt->nents,
+					       DMA_TO_DEVICE);
 	}
 
 	return msm_obj->pages;
@@ -133,14 +144,16 @@ static void put_pages(struct drm_gem_object *obj)
 
 	if (msm_obj->pages) {
 		if (msm_obj->sgt) {
-			/* For non-cached buffers, ensure the new
+			/*
+			 * For non-cached buffers, ensure the new
 			 * pages are clean because display controller,
 			 * GPU, etc. are not coherent:
 			 */
-			if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-				dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
-					     msm_obj->sgt->nents,
-					     DMA_BIDIRECTIONAL);
+			if (msm_obj->flags & (MSM_BO_WC | MSM_BO_UNCACHED))
+				dma_sync_sg_for_cpu(obj->dev->dev,
+						    msm_obj->sgt->sgl,
+						    msm_obj->sgt->nents,
+						    DMA_FROM_DEVICE);
 
 			sg_free_table(msm_obj->sgt);
 			kfree(msm_obj->sgt);
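
Taken together, the two hunks follow the usual streaming-DMA
ownership pairing (a sketch of the call sites as the patch leaves
them, not additional code):

	/* get_pages(): hand the WC/uncached buffer to the device */
	dma_sync_sg_for_device(dev->dev, msm_obj->sgt->sgl,
			       msm_obj->sgt->nents, DMA_TO_DEVICE);

	/* device accesses the pages via driver-managed IOMMU mappings */

	/* put_pages(): hand the buffer back to the CPU before freeing */
	dma_sync_sg_for_cpu(obj->dev->dev, msm_obj->sgt->sgl,
			    msm_obj->sgt->nents, DMA_FROM_DEVICE);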
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation