Message-Id: <20190716164221.15436-2-robdclark@gmail.com>
Date: Tue, 16 Jul 2019 09:42:15 -0700
From: Rob Clark <robdclark@...il.com>
To: dri-devel@...ts.freedesktop.org
Cc: Rob Clark <robdclark@...omium.org>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Emil Velikov <emil.velikov@...labora.com>,
Thomas Zimmermann <tzimmermann@...e.de>,
Rodrigo Siqueira <rodrigosiqueiramelo@...il.com>,
Eric Biggers <ebiggers@...gle.com>,
Imre Deak <imre.deak@...el.com>,
Deepak Sharma <deepak.sharma@....com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/2] drm/vgem: use normal cached mmap'ings
From: Rob Clark <robdclark@...omium.org>

Since there is no real device associated with vgem, it is impossible to
end up with appropriate dev->dma_ops, meaning that we have no way to
invalidate the shmem pages allocated by vgem. So, at least on platforms
without drm_cflush_pages(), we end up with corruption when cache lines
from previous usage of vgem bo pages get evicted to memory.
The only sane option is to use cached mappings.

Signed-off-by: Rob Clark <robdclark@...omium.org>
---
Possibly we could dma_sync_*_for_{device,cpu}() on dmabuf attach/detach,
although the ->gem_prime_{pin,unpin}() API isn't quite ideal for that as
it is. And that doesn't really help for drivers that don't attach/
detach for each use.
But AFAICT vgem is mainly used for dmabuf testing, so maybe we don't
need to care too much about use of cached mmap'ings.

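For reference, the dma_sync-on-attach idea mentioned above might look
roughly like the sketch below, syncing against the importing device at
map time (since vgem itself has no real device to sync against).  This
is a hypothetical sketch only, not part of this patch; the vgem_bo and
bo->table names are assumptions, and as noted the existing
->gem_prime_{pin,unpin}() hooks don't carry the attachment needed here:

```c
/*
 * Hypothetical sketch -- not part of this patch.  vgem has no real
 * device of its own, so any dma_sync would have to run against the
 * importing device (attach->dev).  vgem_bo and bo->table are assumed
 * names for the object and its cached sg_table.
 */
static struct sg_table *vgem_map_dma_buf(struct dma_buf_attachment *attach,
					 enum dma_data_direction dir)
{
	struct vgem_bo *bo = attach->dmabuf->priv;

	/* Flush dirty CPU cache lines before the device touches the pages. */
	dma_sync_sg_for_device(attach->dev, bo->table->sgl,
			       bo->table->nents, dir);

	return bo->table;
}
```

Even with something like this, importers that map once and reuse the
buffer indefinitely would still miss syncs around later CPU access,
which is the "doesn't really help" caveat above.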
drivers/gpu/drm/vgem/vgem_drv.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 11a8f99ba18c..ccf0c3fbd586 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -259,9 +259,6 @@ static int vgem_mmap(struct file *filp, struct vm_area_struct *vma)
if (ret)
return ret;
- /* Keep the WC mmaping set by drm_gem_mmap() but our pages
- * are ordinary and not special.
- */
vma->vm_flags = flags | VM_DONTEXPAND | VM_DONTDUMP;
return 0;
}
@@ -382,7 +379,7 @@ static void *vgem_prime_vmap(struct drm_gem_object *obj)
if (IS_ERR(pages))
return NULL;
- return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+ return vmap(pages, n_pages, 0, PAGE_KERNEL);
}
static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
@@ -411,7 +408,7 @@ static int vgem_prime_mmap(struct drm_gem_object *obj,
fput(vma->vm_file);
vma->vm_file = get_file(obj->filp);
vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
- vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
return 0;
}
--
2.21.0