Message-Id: <1502974596-23835-7-git-send-email-joro@8bytes.org>
Date: Thu, 17 Aug 2017 14:56:29 +0200
From: Joerg Roedel <joro@...tes.org>
To: iommu@...ts.linux-foundation.org
Cc: linux-kernel@...r.kernel.org,
Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
Joerg Roedel <jroedel@...e.de>,
Lucas Stach <l.stach@...gutronix.de>,
Russell King <linux+etnaviv@...linux.org.uk>,
Christian Gmeiner <christian.gmeiner@...il.com>,
David Airlie <airlied@...ux.ie>, etnaviv@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org
Subject: [PATCH 06/13] drm/etnaviv: Use synchronized interface of the IOMMU-API
From: Joerg Roedel <jroedel@...e.de>

The map and unmap functions of the IOMMU-API have changed their
semantics: they no longer guarantee that the hardware
TLBs are synchronized with the page-table updates they make.

To make the conversion easier, new synchronized functions have
been introduced which provide these guarantees again, until the
code is converted to the new TLB-flush interface of the
IOMMU-API, which allows certain optimizations.

But for now, just convert this code to use the synchronized
functions so that it behaves as before.
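
As an aside (not part of this patch): for a caller, the difference looks
roughly like the sketch below. iommu_unmap_sync() and iommu_tlb_sync()
are assumed from the TLB-flush interface introduced earlier in this
series; the helper names are made up for illustration only.

#include <linux/iommu.h>

/* Unsynchronized interface: the page tables are updated, but the
 * hardware TLBs may still hold stale entries, so an explicit flush
 * is needed before the IOVA range may be considered free for reuse. */
static size_t example_unmap_then_flush(struct iommu_domain *domain,
				       unsigned long iova, size_t size)
{
	size_t unmapped;

	unmapped = iommu_unmap(domain, iova, size);
	iommu_tlb_sync(domain);

	return unmapped;
}

/* Synchronized wrapper: one call with the old semantics, which is what
 * this patch switches the etnaviv call sites to. */
static size_t example_unmap_sync(struct iommu_domain *domain,
				 unsigned long iova, size_t size)
{
	return iommu_unmap_sync(domain, iova, size);
}
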
Cc: Lucas Stach <l.stach@...gutronix.de>
Cc: Russell King <linux+etnaviv@...linux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@...il.com>
Cc: David Airlie <airlied@...ux.ie>
Cc: etnaviv@...ts.freedesktop.org
Cc: dri-devel@...ts.freedesktop.org
Signed-off-by: Joerg Roedel <jroedel@...e.de>
---
drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index f103e78..ae0247c 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -47,7 +47,7 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova,
 		VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
-		ret = iommu_map(domain, da, pa, bytes, prot);
+		ret = iommu_map_sync(domain, da, pa, bytes, prot);
 		if (ret)
 			goto fail;
@@ -62,7 +62,7 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova,
 	for_each_sg(sgt->sgl, sg, i, j) {
 		size_t bytes = sg_dma_len(sg) + sg->offset;
-		iommu_unmap(domain, da, bytes);
+		iommu_unmap_sync(domain, da, bytes);
 		da += bytes;
 	}
 	return ret;
@@ -80,7 +80,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, u32 iova,
 		size_t bytes = sg_dma_len(sg) + sg->offset;
 		size_t unmapped;
-		unmapped = iommu_unmap(domain, da, bytes);
+		unmapped = iommu_unmap_sync(domain, da, bytes);
 		if (unmapped < bytes)
 			return unmapped;
@@ -338,7 +338,7 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_gpu *gpu, dma_addr_t paddr,
 			mutex_unlock(&mmu->lock);
 			return ret;
 		}
-		ret = iommu_map(mmu->domain, vram_node->start, paddr, size,
+		ret = iommu_map_sync(mmu->domain, vram_node->start, paddr, size,
 				IOMMU_READ);
 		if (ret < 0) {
 			drm_mm_remove_node(vram_node);
@@ -362,7 +362,7 @@ void etnaviv_iommu_put_suballoc_va(struct etnaviv_gpu *gpu,
 	if (mmu->version == ETNAVIV_IOMMU_V2) {
 		mutex_lock(&mmu->lock);
-		iommu_unmap(mmu->domain,iova, size);
+		iommu_unmap_sync(mmu->domain,iova, size);
 		drm_mm_remove_node(vram_node);
 		mutex_unlock(&mmu->lock);
 	}
--
2.7.4