Message-ID: <99671205-134d-7563-63e2-b65c13d5d074@arm.com>
Date: Tue, 15 Jun 2021 12:25:11 +0100
From: Robin Murphy <robin.murphy@....com>
To: Nadav Amit <nadav.amit@...il.com>, Joerg Roedel <joro@...tes.org>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
Nadav Amit <namit@...are.com>,
Jiajun Cao <caojiajun@...are.com>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH v3 6/6] iommu/amd: Sync once for scatter-gather operations
On 2021-06-07 19:25, Nadav Amit wrote:
> From: Nadav Amit <namit@...are.com>
>
> On virtual machines, software must flush the IOTLB after each page table
> entry update.
>
> The iommu_map_sg() code iterates through the given scatter-gather list
> and invokes iommu_map() for each element in the scatter-gather list,
> which calls into the vendor IOMMU driver through the iommu_ops callback. As
> a result, a single sg mapping may lead to multiple IOTLB flushes.
>
> Fix this by adding an amd_iommu_iotlb_sync_map() callback that flushes
> once at this point, after all sg mappings have been set.
>
> This commit follows and is inspired by commit 933fcd01e97e2
> ("iommu/vt-d: Add iotlb_sync_map callback").
>
> Cc: Joerg Roedel <joro@...tes.org>
> Cc: Will Deacon <will@...nel.org>
> Cc: Jiajun Cao <caojiajun@...are.com>
> Cc: Robin Murphy <robin.murphy@....com>
> Cc: Lu Baolu <baolu.lu@...ux.intel.com>
> Cc: iommu@...ts.linux-foundation.org
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Nadav Amit <namit@...are.com>
> ---
> drivers/iommu/amd/iommu.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 128f2e889ced..dd23566f1db8 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -2027,6 +2027,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
> return ret;
> }
>
> +static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
> + unsigned long iova, size_t size)
> +{
> + struct protection_domain *domain = to_pdomain(dom);
> + struct io_pgtable_ops *ops = &domain->iop.iop.ops;
> +
> + if (ops->map)
Not too critical since you're only moving existing code around, but is
ops->map ever not set? Either way the check ends up looking rather
out-of-place here :/
It's not very clear what the original intent was - I do wonder whether
it's supposed to be related to PAGE_MODE_NONE, but given that
amd_iommu_map() has an explicit check and errors out early in that case,
we'd never get here anyway. Possibly something to come back and clean up
later?
Robin.
> + domain_flush_np_cache(domain, iova, size);
> +}
> +
> static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
> phys_addr_t paddr, size_t page_size, int iommu_prot,
> gfp_t gfp)
> @@ -2045,10 +2055,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
> if (iommu_prot & IOMMU_WRITE)
> prot |= IOMMU_PROT_IW;
>
> - if (ops->map) {
> + if (ops->map)
> ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
> - domain_flush_np_cache(domain, iova, page_size);
> - }
>
> return ret;
> }
> @@ -2249,6 +2257,7 @@ const struct iommu_ops amd_iommu_ops = {
> .attach_dev = amd_iommu_attach_device,
> .detach_dev = amd_iommu_detach_device,
> .map = amd_iommu_map,
> + .iotlb_sync_map = amd_iommu_iotlb_sync_map,
> .unmap = amd_iommu_unmap,
> .iova_to_phys = amd_iommu_iova_to_phys,
> .probe_device = amd_iommu_probe_device,
>