Message-ID: <f64b1349-d271-7b57-0eea-276dda065a87@arm.com>
Date: Thu, 19 Aug 2021 10:03:09 +0100
From: Robin Murphy <robin.murphy@....com>
To: David Stevens <stevensd@...omium.org>,
Christoph Hellwig <hch@....de>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Tom Murphy <murphyt7@....ie>, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 7/7] dma-iommu: account for min_align_mask
On 2021-08-17 02:38, David Stevens wrote:
> From: David Stevens <stevensd@...omium.org>
>
> For devices which set min_align_mask, swiotlb preserves the offset of
> the original physical address within that mask. Since __iommu_dma_map
> accounts for non-aligned addresses, passing a non-aligned swiotlb
> address with the swiotlb aligned size results in the offset being
> accounted for twice in the size passed to iommu_map_atomic. The extra
> page exposed to DMA is also not cleaned up by __iommu_dma_unmap, since
> that function unmaps with the correct size. This causes mapping failures
> if the iova gets reused, due to collisions in the iommu page tables.
>
> To fix this, pass the original size to __iommu_dma_map, since that
> function already handles alignment.
>
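To put illustrative numbers on the double accounting (assuming a 4KiB IOVA
granule, min_align_mask = 0xfff, a buffer at offset 0xa00 within its page,
size = 0x400, and taking __iommu_dma_map() to size the mapping as roughly
iova_align(iovad, size + iova_offset(iovad, phys))):

	aligned_size = iova_align(iovad, 0x400)        = 0x1000
	bounce phys  = swiotlb slot base + 0xa00         (offset preserved)

	before: __iommu_dma_map(..., aligned_size) maps
	        iova_align(iovad, 0x1000 + 0xa00)      = 0x2000  (extra page)
	after:  __iommu_dma_map(..., size) maps
	        iova_align(iovad, 0x400 + 0xa00)       = 0x1000
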
> Additionally, when swiotlb returns non-aligned addresses, there is
> padding at the start of the bounce buffer that needs to be cleared.
>
> Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
> Signed-off-by: David Stevens <stevensd@...omium.org>
> ---
> drivers/iommu/dma-iommu.c | 24 +++++++++++++-----------
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 6738420fc081..f2fb360c2907 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -788,7 +788,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>  	struct iova_domain *iovad = &cookie->iovad;
> -	size_t aligned_size = size;
>  	dma_addr_t iova, dma_mask = dma_get_mask(dev);
>
>  	/*
> @@ -796,8 +795,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
>  	if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
> -		void *padding_start;
> -		size_t padding_size;
> +		void *tlb_start;
> +		size_t aligned_size, iova_off, mapping_end_off;
>
>  		aligned_size = iova_align(iovad, size);
>  		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
> @@ -806,23 +805,26 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  		if (phys == DMA_MAPPING_ERROR)
>  			return DMA_MAPPING_ERROR;
>
> -		/* Cleanup the padding area. */
> -		padding_start = phys_to_virt(phys);
> -		padding_size = aligned_size;
> +		iova_off = iova_offset(iovad, phys);
> +		tlb_start = phys_to_virt(phys - iova_off);
>
>  		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
>  		    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
> -			padding_start += size;
> -			padding_size -= size;
> +			/* Cleanup the padding area. */
> +			mapping_end_off = iova_off + size;
> +			memset(tlb_start, 0, iova_off);
> +			memset(tlb_start + mapping_end_off, 0,
> +			       aligned_size - mapping_end_off);
> +		} else {
> +			/* Nothing was sync'ed, so clear the whole buffer. */
> +			memset(tlb_start, 0, aligned_size);
>  		}
> -
> -		memset(padding_start, 0, padding_size);
>  	}
>
>  	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>  		arch_sync_dma_for_device(phys, size, dir);
>
> -	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
> +	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);

I still don't see how this preserves min_align_mask if it is larger than
the IOVA granule (either way, this change here does nothing, since the
first thing __iommu_dma_map() does is iova_align() the size right back
anyway).
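
For reference, the relevant path looks roughly like this (a trimmed
sketch from memory, not the exact source):

static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
		size_t size, int prot, u64 dma_mask)
{
	...
	size_t iova_off = iova_offset(iovad, phys);
	dma_addr_t iova;

	/* Whatever size the caller passes is rounded out to IOVA granules. */
	size = iova_align(iovad, size + iova_off);

	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
	if (!iova)
		return DMA_MAPPING_ERROR;

	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
		iommu_dma_free_iova(cookie, iova, size, NULL);
		return DMA_MAPPING_ERROR;
	}
	return iova + iova_off;
}

So the mapped size is only ever granule-aligned; nothing here honours an
alignment mask bigger than the granule.
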
Robin.

>  	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
>  		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
>  	return iova;
>