Message-Id: <20210713094151.652597-7-namit@vmware.com>
Date: Tue, 13 Jul 2021 02:41:50 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Joerg Roedel <joro@...tes.org>
Cc: John Garry <john.garry@...wei.com>, Nadav Amit <namit@...are.com>,
Will Deacon <will@...nel.org>,
Jiajun Cao <caojiajun@...are.com>,
Robin Murphy <robin.murphy@....com>,
Lu Baolu <baolu.lu@...ux.intel.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: [PATCH v5 6/7] iommu/amd: Sync once for scatter-gather operations
From: Nadav Amit <namit@...are.com>
On virtual machines, software must flush the IOTLB after each page table
entry update.
The iommu_map_sg() code iterates through the given scatter-gather list
and invokes iommu_map() for each element, which in turn calls into the
vendor IOMMU driver through an iommu_ops callback. As a result, mapping
a single sg list may lead to multiple IOTLB flushes.
Fix this by adding an amd_iommu_iotlb_sync_map() callback and flushing
at that point, after all the sg mappings have been set; see the sketch
below.
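Roughly, the core layer can then flush once after the whole range is
mapped (a simplified sketch of the call site, not the exact core code):

/*
 * Simplified sketch: after the whole range (or sg list) is mapped,
 * the core invokes ->iotlb_sync_map once, so the AMD driver performs
 * a single flush instead of one per element.
 */
static int iommu_map_and_sync_sketch(struct iommu_domain *domain,
				     unsigned long iova, phys_addr_t paddr,
				     size_t size, int prot, gfp_t gfp)
{
	const struct iommu_ops *ops = domain->ops;
	int ret = __iommu_map(domain, iova, paddr, size, prot, gfp);

	if (ret == 0 && ops->iotlb_sync_map)
		ops->iotlb_sync_map(domain, iova, size);

	return ret;
}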
This patch follows and is inspired by commit 933fcd01e97e2
("iommu/vt-d: Add iotlb_sync_map callback").
Cc: Joerg Roedel <joro@...tes.org>
Cc: Will Deacon <will@...nel.org>
Cc: Jiajun Cao <caojiajun@...are.com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: iommu@...ts.linux-foundation.org
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Nadav Amit <namit@...are.com>
---
drivers/iommu/amd/iommu.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index cc55c4c6a355..c1fcd01b3c40 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2022,6 +2022,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
return ret;
}
+static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+ unsigned long iova, size_t size)
+{
+ struct protection_domain *domain = to_pdomain(dom);
+ struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+
+ if (ops->map)
+ domain_flush_np_cache(domain, iova, size);
+}
+
static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
phys_addr_t paddr, size_t page_size, int iommu_prot,
gfp_t gfp)
@@ -2040,10 +2050,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
if (iommu_prot & IOMMU_WRITE)
prot |= IOMMU_PROT_IW;
- if (ops->map) {
+ if (ops->map)
ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
- domain_flush_np_cache(domain, iova, page_size);
- }
return ret;
}
@@ -2223,6 +2231,7 @@ const struct iommu_ops amd_iommu_ops = {
.attach_dev = amd_iommu_attach_device,
.detach_dev = amd_iommu_detach_device,
.map = amd_iommu_map,
+ .iotlb_sync_map = amd_iommu_iotlb_sync_map,
.unmap = amd_iommu_unmap,
.iova_to_phys = amd_iommu_iova_to_phys,
.probe_device = amd_iommu_probe_device,
--
2.25.1