Message-Id: <20210830045925.4163412-8-stevensd@google.com>
Date: Mon, 30 Aug 2021 13:59:25 +0900
From: David Stevens <stevensd@...omium.org>
To: Robin Murphy <robin.murphy@....com>, Christoph Hellwig <hch@....de>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Tom Murphy <murphyt7@....ie>, Rajat Jain <rajatja@...gle.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
David Stevens <stevensd@...omium.org>
Subject: [PATCH v7 7/7] dma-iommu: account for min_align_mask w/swiotlb

From: David Stevens <stevensd@...omium.org>

Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce
buffers in iommu_dma_map_page, to account for min_align_mask.

To deal with granule alignment, __iommu_dma_map maps iova_align(size +
iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page
passes the aligned size when using swiotlb, this becomes
iova_align(iova_align(orig_size) + iova_off). Normally iova_off is zero
when using swiotlb, but this is not the case for devices that set
min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up
mapping an extra page at the end of the buffer. Beyond just being a
security issue, the extra page is not cleaned up by __iommu_dma_unmap.
This causes problems when the IOVA is reused, due to collisions in the
iommu driver. Just passing the original size is sufficient, since
__iommu_dma_map will take care of granule alignment.

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens <stevensd@...omium.org>
---
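Not part of the patch, for illustration only: a minimal userspace
sketch of the double-alignment arithmetic described above. The granule,
size, and physical offset are made-up example values, and iova_align()
/ iova_offset() here are simplified stand-ins for the kernel helpers,
assuming a fixed 4 KiB IOVA granule rather than a struct iova_domain.

/*
 * Illustration only: shows how passing an already-aligned size into
 * __iommu_dma_map() double-aligns the mapping when iova_off != 0.
 * GRANULE, size, and phys are hypothetical example values.
 */
#include <stdio.h>
#include <stddef.h>

#define GRANULE 4096UL	/* assumed IOVA granule */

/* Simplified stand-ins for the kernel's iova_align()/iova_offset(). */
static size_t iova_align(size_t size)
{
	return (size + GRANULE - 1) & ~(GRANULE - 1);
}

static size_t iova_offset(size_t addr)
{
	return addr & (GRANULE - 1);
}

int main(void)
{
	size_t size = 512;	/* original mapping size */
	size_t phys = 0xa00;	/* swiotlb keeps this low-bit offset when
				   min_align_mask covers it */
	size_t iova_off = iova_offset(phys);

	/* Old behaviour: aligned size passed in, then aligned again. */
	size_t old_len = iova_align(iova_align(size) + iova_off);
	/* New behaviour: original size passed in, aligned once. */
	size_t new_len = iova_align(size + iova_off);

	printf("old: %zu bytes mapped\n", old_len);	/* 8192: extra page */
	printf("new: %zu bytes mapped\n", new_len);	/* 4096 */
	return 0;
}

With iova_off == 0 the two expressions agree, which is why the extra
page only shows up for devices that set min_align_mask.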
 drivers/iommu/dma-iommu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9b8c17c3d29b..addcaa09db12 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -805,7 +805,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
-	size_t aligned_size = size;
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
@@ -814,7 +813,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	 */
 	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
 		void *padding_start;
-		size_t padding_size;
+		size_t padding_size, aligned_size;
 
 		aligned_size = iova_align(iovad, size);
 		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
@@ -839,7 +838,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
-	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
--
2.33.0.259.gc128427fd7-goog