Message-Id: <20210811024247.1144246-6-stevensd@google.com>
Date: Wed, 11 Aug 2021 11:42:47 +0900
From: David Stevens <stevensd@...omium.org>
To: Robin Murphy <robin.murphy@....com>, Will Deacon <will@...nel.org>
Cc: Joerg Roedel <joro@...tes.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Tom Murphy <murphyt7@....ie>, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, David Stevens <stevensd@...omium.org>
Subject: [PATCH v3 5/5] dma-iommu: account for min_align_mask
From: David Stevens <stevensd@...omium.org>

For devices which set min_align_mask, swiotlb preserves the offset of
the original physical address within that mask. Since __iommu_dma_map
accounts for non-aligned addresses, passing a non-aligned swiotlb
address with the swiotlb aligned size results in the offset being
accounted for twice in the size passed to iommu_map_atomic. The extra
page exposed to DMA is also not cleaned up by __iommu_dma_unmap, since
that function unmaps with the correct size. This causes mapping failures
if the iova gets reused, due to collisions in the iommu page tables.
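
As an illustration (hypothetical numbers: a 4KiB IOVA granule, an
0x800-byte mapping, and a min_align_mask that preserves an 0x800
offset in the bounce address), the map and unmap paths disagree:

	/* sketch, not the patch itself: org_size = 0x800, iova_off = 0x800 */
	aligned_size = iova_align(iovad, org_size);	/* 0x1000 */
	/* map, before this patch: __iommu_dma_map() adds iova_off again */
	iova_align(iovad, aligned_size + iova_off);	/* 0x2000 -> maps 2 pages */
	/* unmap: __iommu_dma_unmap() sizes from the original request */
	iova_align(iovad, org_size + iova_off);		/* 0x1000 -> unmaps 1 page */
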
To fix this, pass the original size to __iommu_dma_map, since that
function already handles alignment.
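
For reference, __iommu_dma_map() already aligns the size itself,
roughly:

	iova_off = iova_offset(iovad, phys);
	size = iova_align(iovad, size + iova_off);

so passing org_size makes the mapped size match what the unmap path
later computes.
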
Additionally, when swiotlb returns non-aligned addresses, there is
padding at the start of the bounce buffer that needs to be cleared.
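
Roughly, the bounce buffer layout in the non-aligned case looks like
this (sketch, not to scale):

	tlb_start       phys                 phys + org_size
	    |<-iova_off->|<---- org_size ---->|<--- tail padding --->|
	    |  padding   |    mapped data     |       padding        |
	    |<-------------------- aligned_size -------------------->|

Both the head and the tail padding must be cleared so that no stale
swiotlb contents are exposed to the device.
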
Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens <stevensd@...omium.org>
---
drivers/iommu/dma-iommu.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 89b689bf801f..ffa7e8ef5db4 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -549,9 +549,8 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	size_t aligned_size = org_size;
-	void *padding_start;
-	size_t padding_size;
+	void *tlb_start;
+	size_t aligned_size, iova_off, mapping_end_off;
 	dma_addr_t iova;
 
 	/*
@@ -566,24 +565,26 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 		if (phys == DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 
-		/* Cleanup the padding area. */
-		padding_start = phys_to_virt(phys);
-		padding_size = aligned_size;
+		iova_off = iova_offset(iovad, phys);
+		tlb_start = phys_to_virt(phys - iova_off);
 
+		/* Cleanup the padding area. */
 		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 		    (dir == DMA_TO_DEVICE ||
 		     dir == DMA_BIDIRECTIONAL)) {
-			padding_start += org_size;
-			padding_size -= org_size;
+			mapping_end_off = iova_off + org_size;
+			memset(tlb_start, 0, iova_off);
+			memset(tlb_start + mapping_end_off, 0,
+			       aligned_size - mapping_end_off);
+		} else {
+			memset(tlb_start, 0, aligned_size);
 		}
-
-		memset(padding_start, 0, padding_size);
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, org_size, dir);
 
-	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+	iova = __iommu_dma_map(dev, phys, org_size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
--
2.32.0.605.g8dce9f2422-goog