Message-ID: <09c04e0428f422c1b13d2b054af16e719de318a3.1754292567.git.leon@kernel.org>
Date: Mon, 4 Aug 2025 15:42:40 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Leon Romanovsky <leonro@...dia.com>,
Jason Gunthorpe <jgg@...dia.com>,
Abdiel Janulgue <abdiel.janulgue@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Alex Gaynor <alex.gaynor@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@....de>,
Danilo Krummrich <dakr@...nel.org>,
iommu@...ts.linux.dev,
Jason Wang <jasowang@...hat.com>,
Jens Axboe <axboe@...nel.dk>,
Joerg Roedel <joro@...tes.org>,
Jonathan Corbet <corbet@....net>,
Juergen Gross <jgross@...e.com>,
kasan-dev@...glegroups.com,
Keith Busch <kbusch@...nel.org>,
linux-block@...r.kernel.org,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-nvme@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org,
linux-trace-kernel@...r.kernel.org,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
"Michael S. Tsirkin" <mst@...hat.com>,
Miguel Ojeda <ojeda@...nel.org>,
Robin Murphy <robin.murphy@....com>,
rust-for-linux@...r.kernel.org,
Sagi Grimberg <sagi@...mberg.me>,
Stefano Stabellini <sstabellini@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
virtualization@...ts.linux.dev,
Will Deacon <will@...nel.org>,
xen-devel@...ts.xenproject.org
Subject: [PATCH v1 06/16] iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
From: Leon Romanovsky <leonro@...dia.com>
Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces so
that a single phys_addr_t-based flow covers both regular memory and MMIO
ranges, with the MMIO case selected by the DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky <leonro@...dia.com>
---
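
[Not part of the original posting: a minimal usage sketch of the extended
API, assuming the DMA_ATTR_MMIO attribute introduced earlier in this
series. The example_map_mmio() helper and its bar_phys parameter are
hypothetical; real callers would normally reach this code through the
generic DMA mapping wrappers rather than calling the iommu_dma_* entry
points directly.]

/*
 * Illustrative sketch only: map an MMIO region (e.g. a device BAR)
 * through the unified phys_addr_t path.  DMA_ATTR_MMIO makes
 * iommu_dma_map_phys() take the __iommu_dma_map() branch with IOMMU_MMIO
 * prot and skip CPU cache maintenance; the same attribute on unmap
 * selects __iommu_dma_unmap().
 */
static int example_map_mmio(struct device *dev, phys_addr_t bar_phys,
			    size_t size)
{
	dma_addr_t dma_addr;

	dma_addr = iommu_dma_map_phys(dev, bar_phys, size, DMA_BIDIRECTIONAL,
				      DMA_ATTR_MMIO);
	if (dma_addr == DMA_MAPPING_ERROR)
		return -ENOMEM;

	/* ... hand dma_addr to the device ... */

	iommu_dma_unmap_phys(dev, dma_addr, size, DMA_BIDIRECTIONAL,
			     DMA_ATTR_MMIO);
	return 0;
}
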
drivers/iommu/dma-iommu.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 11c5d5f8c0981..0a19ce50938b3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1193,12 +1193,17 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	bool coherent = dev_is_dma_coherent(dev);
-	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
+	bool coherent;
+	int prot;
+
+	if (attrs & DMA_ATTR_MMIO)
+		return __iommu_dma_map(dev, phys, size,
+				dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
+				dma_get_mask(dev));
 
 	/*
 	 * If both the physical buffer start address and size are page aligned,
@@ -1211,6 +1216,9 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 			return DMA_MAPPING_ERROR;
 	}
 
+	coherent = dev_is_dma_coherent(dev);
+	prot = dma_info_to_prot(dir, coherent, attrs);
+
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
@@ -1223,10 +1231,14 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
 
-	phys = iommu_iova_to_phys(domain, dma_handle);
+	if (attrs & DMA_ATTR_MMIO) {
+		__iommu_dma_unmap(dev, dma_handle, size);
+		return;
+	}
+
+	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (WARN_ON(!phys))
 		return;
 
--
2.50.1