Message-Id: <20190805211451.20176-2-robdclark@gmail.com>
Date:   Mon,  5 Aug 2019 14:14:34 -0700
From:   Rob Clark <robdclark@...il.com>
To:     dri-devel@...ts.freedesktop.org
Cc:     Christoph Hellwig <hch@....de>, Rob Clark <robdclark@...omium.org>,
        Rob Clark <robdclark@...il.com>, Sean Paul <sean@...rly.run>,
        David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>, linux-arm-msm@...r.kernel.org,
        freedreno@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: [PATCH 2/2] drm/msm: use drm_cache when available

From: Rob Clark <robdclark@...omium.org>

For a long time drm/msm has been abusing dma_map_* or dma_sync_* to
clean pages for buffers with uncached/writecombine CPU mmappings.

But drm/msm manages its own iommu domains, and really doesn't want
the additional functionality provided by the various DMA API ops.

Let's just cut the abstraction and use drm_cache where possible.

Signed-off-by: Rob Clark <robdclark@...omium.org>
---
 drivers/gpu/drm/msm/msm_gem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 8cf6362e64bf..af19ef20d0d5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -9,6 +9,8 @@
 #include <linux/dma-buf.h>
 #include <linux/pfn_t.h>
 
+#include <drm/drm_cache.h>
+
 #include "msm_drv.h"
 #include "msm_fence.h"
 #include "msm_gem.h"
@@ -48,6 +50,7 @@ static bool use_pages(struct drm_gem_object *obj)
 
 static void sync_for_device(struct msm_gem_object *msm_obj)
 {
+#if !defined(HAS_DRM_CACHE)
 	struct device *dev = msm_obj->base.dev->dev;
 
 	if (get_dma_ops(dev)) {
@@ -57,10 +60,14 @@ static void sync_for_device(struct msm_gem_object *msm_obj)
 		dma_map_sg(dev, msm_obj->sgt->sgl,
 			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
+#else
+	drm_clflush_sg(msm_obj->sgt);
+#endif
 }
 
 static void sync_for_cpu(struct msm_gem_object *msm_obj)
 {
+#if !defined(HAS_DRM_CACHE)
 	struct device *dev = msm_obj->base.dev->dev;
 
 	if (get_dma_ops(dev)) {
@@ -70,6 +77,7 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj)
 		dma_unmap_sg(dev, msm_obj->sgt->sgl,
 			msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
+#endif
 }
 
 /* allocate pages from VRAM carveout, used when no IOMMU: */
-- 
2.21.0
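
The change reduces to one pattern: clean CPU caches for a buffer's
scatterlist directly via drm_cache when the architecture supports it,
and fall back to the streaming DMA API otherwise. Below is a minimal
sketch of that pattern, as an illustration rather than part of the
patch: example_flush_for_device is a hypothetical helper, and
HAS_DRM_CACHE is assumed to be provided by the companion patch in
this series.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

#include <drm/drm_cache.h>

/* Hypothetical helper, for illustration only: clean CPU caches for a
 * buffer before device access, preferring drm_cache over the DMA API.
 */
static void example_flush_for_device(struct device *dev,
				     struct sg_table *sgt)
{
#if defined(HAS_DRM_CACHE)
	/* Architecture-level cache flush: no IOMMU or DMA API side
	 * effects, which suits a driver that manages its own iommu
	 * domains.
	 */
	drm_clflush_sg(sgt);
#else
	/* Fallback: the mapping exists only for the cache clean it
	 * performs, mirroring the existing sync_for_device() path.
	 */
	dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
#endif
}

The dma_map_sg() return value is ignored here for the same reason it
is in sync_for_device() above: the call is made purely for its cache
maintenance side effect, not to obtain DMA addresses.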
