Message-Id: <20191008221837.13067-2-logang@deltatee.com>
Date: Tue, 8 Oct 2019 16:18:35 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
Joerg Roedel <joro@...tes.org>
Cc: Kit Chow <kchow@...aio.com>, Logan Gunthorpe <logang@...tatee.com>
Subject: [PATCH 1/3] iommu/amd: Implement dma_[un]map_resource()
From: Kit Chow <kchow@...aio.com>

Currently the AMD IOMMU uses the default dma_[un]map_resource()
implementation, which does nothing and simply returns the physical
address unmodified.

However, this doesn't create the IOVA entries necessary for addresses
mapped this way to work when the IOMMU is enabled. Thus, when the
IOMMU is enabled, drivers relying on dma_map_resource() will not get a
proper mapping. We see this when running ntb_transport with switchtec
hardware, a DMA engine, and the IOMMU enabled.
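
For illustration, a peer-to-peer user of this API maps a remote
device's MMIO window roughly like the sketch below (dma_dev, bar_phys
and bar_size are hypothetical names, and the direction/attrs arguments
depend on the caller):

	dma_addr_t dma_dst;

	/* Map the peer's BAR so the DMA engine behind dma_dev can
	 * reach it through the IOMMU. */
	dma_dst = dma_map_resource(dma_dev, bar_phys, bar_size,
				   DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dma_dev, dma_dst))
		return -EIO;

	/* ... hand dma_dst to the DMA engine as a bus address ... */

	dma_unmap_resource(dma_dev, dma_dst, bar_size,
			   DMA_BIDIRECTIONAL, 0);

With this patch applied, dma_dst above is a proper IOVA on AMD systems
with the IOMMU enabled rather than the untranslated physical address.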

The AMD implementation of map_resource() is nearly identical to
map_page(); it simply takes a phys_addr instead of a page.
dma_unmap_resource() reuses unmap_page() directly, as the two
functions are identical.
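
For reference, the two callbacks in struct dma_map_ops have the same
prototype (both take a dma_addr_t), which is what allows unmap_page()
to be wired up as .unmap_resource as well; roughly:

	void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
			   size_t size, enum dma_data_direction dir,
			   unsigned long attrs);
	void (*unmap_resource)(struct device *dev, dma_addr_t dma_handle,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs);
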
Signed-off-by: Kit Chow <kchow@...aio.com>
[logang@...tatee.com: Cleaned up into a proper commit and wrote the
commit message]
Signed-off-by: Logan Gunthorpe <logang@...tatee.com>
---
drivers/iommu/amd_iommu.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 2369b8af81f3..aa3d9e705a45 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2553,6 +2553,23 @@ static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
__unmap_single(dma_dom, dma_addr, size, dir);
}

+static dma_addr_t map_resource(struct device *dev, phys_addr_t paddr,
+ size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+ struct protection_domain *domain;
+ struct dma_ops_domain *dma_dom;
+
+ domain = get_domain(dev);
+ if (PTR_ERR(domain) == -EINVAL)
+ return (dma_addr_t)paddr;
+ else if (IS_ERR(domain))
+ return DMA_MAPPING_ERROR;
+
+ dma_dom = to_dma_ops_domain(domain);
+
+ return __map_single(dev, dma_dom, paddr, size, dir, *dev->dma_mask);
+}
+
static int sg_num_pages(struct device *dev,
struct scatterlist *sglist,
int nelems)
@@ -2795,6 +2812,8 @@ static const struct dma_map_ops amd_iommu_dma_ops = {
.unmap_page = unmap_page,
.map_sg = map_sg,
.unmap_sg = unmap_sg,
+ .map_resource = map_resource,
+ .unmap_resource = unmap_page,
.dma_supported = amd_iommu_dma_supported,
.mmap = dma_common_mmap,
.get_sgtable = dma_common_get_sgtable,
--
2.20.1