Message-Id: <1490689242-5097-1-git-send-email-boris.brezillon@free-electrons.com>
Date: Tue, 28 Mar 2017 10:20:42 +0200
From: Boris Brezillon <boris.brezillon@...e-electrons.com>
To: Chris Zankel <chris@...kel.net>, Max Filippov <jcmvbkbc@...il.com>,
linux-xtensa@...ux-xtensa.org
Cc: Maxime Ripard <maxime.ripard@...e-electrons.com>,
Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
linux-kernel@...r.kernel.org,
Boris Brezillon <boris.brezillon@...e-electrons.com>
Subject: [PATCH] xtensa: Fix mmap() support
The xtensa architecture does not implement the dma_map_ops->mmap() hook,
thus relying on the default dma_common_mmap() implementation.
This implementation is only safe on DMA-coherent architectures (hence the
!defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP) condition), and xtensa is not
one of them.
This leads to a bad pfn calculation when someone tries to mmap() one or
several pages that are not part of the identity mapping, because
dma_common_mmap() extracts the pfn value from the virtual address using
virt_to_page(), which is only valid on DMA-coherent platforms (on
other platforms, DMA-coherent pages are mapped in a different region).
Implement xtensa_dma_mmap() (loosely based on __arm_dma_mmap()) in which
pfn is extracted from the DMA address using PFN_DOWN().
While we're at it, select ARCH_NO_COHERENT_DMA_MMAP from the XTENSA
entry so that we never silently fall back to dma_common_mmap() if someone
decides to drop the xtensa_dma_mmap() implementation.
Signed-off-by: Boris Brezillon <boris.brezillon@...e-electrons.com>
---
Hello,
This bug was detected while developing a DRM driver on an FPGA
containing an Xtensa CPU. The DRM driver uses the generic CMA GEM
implementation, which allocates DMA-coherent buffers in kernel space
and then lets userspace programs mmap() these buffers.
With the existing implementation, the userspace pointer pointed to
a completely different physical region, thus leading to bad display
output and memory corruption.
I'm not sure the xtensa_dma_mmap() implementation is correct, but it
seems to solve my problem.
Don't hesitate to propose a different implementation.
Thanks,
Boris
---
arch/xtensa/Kconfig | 1 +
arch/xtensa/kernel/pci-dma.c | 23 +++++++++++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index f4126cf997a4..2e672a5e9fdf 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -3,6 +3,7 @@ config ZONE_DMA
config XTENSA
def_bool y
+ select ARCH_NO_COHERENT_DMA_MMAP
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_EXTABLE_SORT
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 70e362e6038e..8f3ef49ceba6 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -249,9 +249,32 @@ int xtensa_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
return 0;
}
+static int xtensa_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ unsigned long attrs)
+{
+ int ret = -ENXIO;
+#ifdef CONFIG_MMU
+ unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ unsigned long pfn = PFN_DOWN(dma_addr);
+ unsigned long off = vma->vm_pgoff;
+
+ if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+ return ret;
+
+ if (off < nr_pages && nr_vma_pages <= (nr_pages - off))
+ ret = remap_pfn_range(vma, vma->vm_start, pfn + off,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
+#endif /* CONFIG_MMU */
+ return ret;
+}
+
struct dma_map_ops xtensa_dma_map_ops = {
.alloc = xtensa_dma_alloc,
.free = xtensa_dma_free,
+ .mmap = xtensa_dma_mmap,
.map_page = xtensa_map_page,
.unmap_page = xtensa_unmap_page,
.map_sg = xtensa_map_sg,
--
2.7.4