Message-Id: <1502974596-23835-5-git-send-email-joro@8bytes.org>
Date: Thu, 17 Aug 2017 14:56:27 +0200
From: Joerg Roedel <joro@...tes.org>
To: iommu@...ts.linux-foundation.org
Cc: linux-kernel@...r.kernel.org,
Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
Joerg Roedel <jroedel@...e.de>,
Robin Murphy <robin.murphy@....com>,
Will Deacon <will.deacon@....com>,
Nate Watterson <nwatters@...eaurora.org>,
Eric Auger <eric.auger@...hat.com>,
Mitchel Humpherys <mitchelh@...eaurora.org>
Subject: [PATCH 04/13] iommu/dma: Use synchronized interface of the IOMMU-API
From: Joerg Roedel <jroedel@...e.de>
The map and unmap functions of the IOMMU-API changed their
semantics: they no longer guarantee that the hardware
TLBs are synchronized with the page-table updates they made.
To make the conversion easier, new synchronized functions
have been introduced which restore these guarantees, until
the code is converted to use the new TLB-flush interface of
the IOMMU-API, which allows certain optimizations.
For now, just convert this code to use the synchronized
functions so that it behaves as before.
Cc: Robin Murphy <robin.murphy@....com>
Cc: Will Deacon <will.deacon@....com>
Cc: Nate Watterson <nwatters@...eaurora.org>
Cc: Eric Auger <eric.auger@...hat.com>
Cc: Mitchel Humpherys <mitchelh@...eaurora.org>
Signed-off-by: Joerg Roedel <jroedel@...e.de>
---
drivers/iommu/dma-iommu.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9d1cebe..38c41a2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -417,7 +417,7 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
dma_addr -= iova_off;
size = iova_align(iovad, size + iova_off);
- WARN_ON(iommu_unmap(domain, dma_addr, size) != size);
+ WARN_ON(iommu_unmap_sync(domain, dma_addr, size) != size);
iommu_dma_free_iova(cookie, dma_addr, size);
}
@@ -572,7 +572,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
sg_miter_stop(&miter);
}
- if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, prot)
+ if (iommu_map_sg_sync(domain, iova, sgt.sgl, sgt.orig_nents, prot)
< size)
goto out_free_sg;
@@ -631,7 +631,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
if (!iova)
return IOMMU_MAPPING_ERROR;
- if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
+ if (iommu_map_sync(domain, iova, phys - iova_off, size, prot)) {
iommu_dma_free_iova(cookie, iova, size);
return IOMMU_MAPPING_ERROR;
}
@@ -791,7 +791,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
* We'll leave any physical concatenation to the IOMMU driver's
* implementation - it knows better than we do.
*/
- if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
+ if (iommu_map_sg_sync(domain, iova, sg, nents, prot) < iova_len)
goto out_free_iova;
return __finalise_sg(dev, sg, nents, iova);
--
2.7.4