Message-ID: <20250502114043.1968976-3-john.ernberg@actia.se>
Date: Fri, 2 May 2025 11:40:56 +0000
From: John Ernberg <john.ernberg@...ia.se>
To: Juergen Gross <jgross@...e.com>, Stefano Stabellini
<sstabellini@...nel.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>, Catalin Marinas
<catalin.marinas@....com>, Andrew Morton <akpm@...ux-foundation.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"imx@...ts.linux.dev" <imx@...ts.linux.dev>, John Ernberg
<john.ernberg@...ia.se>
Subject: [PATCH 2/2] xen: swiotlb: Implement map_resource callback
Needed by the eDMA v3 DMA engine found in IOMMU-less SoCs such as the
i.MX8QXP to be able to perform DMA operations as a Xen Hardware Domain,
which must be able to do DMA in MMIO space.
The callback implementation is essentially the same as the one for direct
mapping of resources, except that it also takes Xen page mappings into
account.
There is nothing to do on unmap, just as with the direct path, so no unmap
callback is needed.
Signed-off-by: John Ernberg <john.ernberg@...ia.se>
---
I originally exported dma_direct_map_resource() and used that, which
appeared to work, but skipping the check against Xen page mappings did not
seem fully correct, so I went with this approach. If
dma_direct_map_resource() is the correct approach here, I can submit that
instead.
---
drivers/xen/swiotlb-xen.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index ef56a2500ed6..0f02fdd9128d 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -285,6 +285,20 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
attrs, pool);
}
+static dma_addr_t xen_swiotlb_map_resource(struct device *dev, phys_addr_t phys,
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
+
+ BUG_ON(dir == DMA_NONE);
+
+ if (!dma_capable(dev, dev_addr, size, false))
+ return DMA_MAPPING_ERROR;
+
+ return dev_addr;
+}
+
static void
xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir)
@@ -426,4 +440,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
.alloc_pages_op = dma_common_alloc_pages,
.free_pages = dma_common_free_pages,
.max_mapping_size = swiotlb_max_mapping_size,
+ .map_resource = xen_swiotlb_map_resource,
};
--
2.49.0