Message-ID: <5b4fd891-a86a-42cd-5b69-bc08d351dd3a@arm.com>
Date: Wed, 11 Aug 2021 19:47:21 +0100
From: Robin Murphy <robin.murphy@....com>
To: David Stevens <stevensd@...omium.org>,
Will Deacon <will@...nel.org>
Cc: Joerg Roedel <joro@...tes.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Tom Murphy <murphyt7@....ie>, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/5] dma-iommu: fix arch_sync_dma for map
On 2021-08-11 03:42, David Stevens wrote:
> From: David Stevens <stevensd@...omium.org>
>
> When calling arch_sync_dma, we need to pass it the memory that's
> actually being used for DMA. When using swiotlb bounce buffers, this is
> the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb
> helper, so it can use the bounce buffer address if necessary. This also
> means it is no longer necessary to call iommu_dma_sync_sg_for_device in
> iommu_dma_map_sg for untrusted devices.
>
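[For context, the ordering at issue can be sketched roughly as follows. This
is a paraphrase of the v3 shape of __iommu_dma_map_swiotlb(), not the literal
patch; the iova_offset() check, padding memset and error handling are elided:

	/*
	 * For untrusted devices, swiotlb_tbl_map_single() rewrites phys to
	 * point at the bounce slot, so cache maintenance done before this
	 * point would hit the original pages rather than the memory the
	 * device actually accesses.
	 */
	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev))
		phys = swiotlb_tbl_map_single(dev, phys, org_size,
					      aligned_size, dir, attrs);

	/* Sync whichever buffer the device will really DMA to/from. */
	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, org_size, dir);
]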
> Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
> Signed-off-by: David Stevens <stevensd@...omium.org>
> ---
> drivers/iommu/dma-iommu.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 54e103b989d9..4f0cc4a0a61f 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -576,6 +576,9 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
>  		memset(padding_start, 0, padding_size);
>  	}
>  
> +	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
Make that an "else if" - otherwise you're just reintroducing the same
thing that the third hunk is trying to clean up.
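That is, roughly the following shape (sketch only, not tested; the existing
bounce-buffer branch is abbreviated to a comment):

	if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
	    iova_offset(iovad, phys | org_size)) {
		/*
		 * Existing bounce-buffer path (unchanged): phys is
		 * rewritten to the swiotlb slot and the padding zeroed.
		 */
	} else if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
		arch_sync_dma_for_device(phys, org_size, dir);
	}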
> +		arch_sync_dma_for_device(phys, org_size, dir);
> +
>  	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
>  	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
>  		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
> @@ -848,14 +851,9 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  {
>  	phys_addr_t phys = page_to_phys(page) + offset;
>  	bool coherent = dev_is_dma_coherent(dev);
> -	dma_addr_t dma_handle;
>  
> -	dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
> +	return __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
>  			coherent, dir, attrs);
Just fold __iommu_dma_map_swiotlb() back into here and have
iommu_dma_map_sg_swiotlb() call iommu_dma_map_page() in the typical
pattern of dma-direct and others. Apparently the only purpose served by
that indirection was allowing these bugs to exist...
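An untested sketch of that shape, reusing the existing unmap helper (this
mirrors what iommu_dma_map_sg_swiotlb() already does per segment, just going
through iommu_dma_map_page() so the sync logic lives in one place):

static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction dir, unsigned long attrs)
{
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i) {
		/* Bounce-buffering and cache sync happen in map_page now. */
		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
				s->offset, s->length, dir, attrs);
		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
			goto out_unmap;
		sg_dma_len(s) = s->length;
	}

	return nents;

out_unmap:
	/* Already-mapped segments were synced at map time; skip re-syncing. */
	iommu_dma_unmap_sg_swiotlb(dev, sg, i, dir,
			attrs | DMA_ATTR_SKIP_CPU_SYNC);
	return 0;
}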
Robin.
> -	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
> -	    dma_handle != DMA_MAPPING_ERROR)
> -		arch_sync_dma_for_device(phys, size, dir);
> -	return dma_handle;
>  }
>  
> static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
> @@ -998,12 +996,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  	    iommu_deferred_attach(dev, domain))
>  		return 0;
>  
> -	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> -		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
> -
>  	if (dev_is_untrusted(dev))
>  		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
>  
> +	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
> +
>  	/*
>  	 * Work out how much IOVA space we need, and align the segments to
>  	 * IOVA granules for the IOMMU driver to handle. With some clever
> 