Message-ID: <f9396eef-26b9-5280-1250-4aaeb0a38c32@amd.com>
Date: Mon, 5 Mar 2018 08:08:21 +0700
From: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
To: iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Cc: joro@...tes.org, jroedel@...e.de, alex.williamson@...hat.com
Subject: Re: [PATCH v4] iommu/amd: Add support for fast IOTLB flushing
Ping.

Joerg, when you get a chance, would you please let me know if you have
any other concerns about this v4?
Thanks,
Suravee
On 2/21/18 2:19 PM, Suravee Suthikulpanit wrote:
> Since the AMD IOMMU driver currently flushes all TLB entries
> whenever it has to invalidate more than one page, use the same
> handler for both iommu_ops.flush_iotlb_all() and
> iommu_ops.iotlb_sync().
>
> Cc: Joerg Roedel <joro@...tes.org>
> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
> ---
> Changes from v3 (https://patchwork.kernel.org/patch/10193235)
> * Change amd_iommu_iotlb_range_add() to a no-op and iotlb_sync()
>   to a full domain flush for now, since we currently flush all
>   entries whenever more than one page is invalidated (see the
>   usage sketch after the patch below).
> * Fine-grained invalidation will be introduced in a subsequent
>   patch series.
>
> drivers/iommu/amd_iommu.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index fed8059..6061a8d 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -3043,9 +3043,6 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
> unmap_size = iommu_unmap_page(domain, iova, page_size);
> mutex_unlock(&domain->api_lock);
>
> - domain_flush_tlb_pde(domain);
> - domain_flush_complete(domain);
> -
> return unmap_size;
> }
>
> @@ -3163,6 +3160,19 @@ static bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
> return dev_data->defer_attach;
> }
>
> +static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> + struct protection_domain *dom = to_pdomain(domain);
> +
> + domain_flush_tlb_pde(dom);
> + domain_flush_complete(dom);
> +}
> +
> +static void amd_iommu_iotlb_range_add(struct iommu_domain *domain,
> + unsigned long iova, size_t size)
> +{
> +}
> +
> const struct iommu_ops amd_iommu_ops = {
> .capable = amd_iommu_capable,
> .domain_alloc = amd_iommu_domain_alloc,
> @@ -3181,6 +3191,9 @@ static bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
> .apply_resv_region = amd_iommu_apply_resv_region,
> .is_attach_deferred = amd_iommu_is_attach_deferred,
> .pgsize_bitmap = AMD_IOMMU_PGSIZES,
> + .flush_iotlb_all = amd_iommu_flush_iotlb_all,
> + .iotlb_range_add = amd_iommu_iotlb_range_add,
> + .iotlb_sync = amd_iommu_flush_iotlb_all,
> };
>
> /*****************************************************************************
>
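For reference, here is a minimal caller-side sketch of how the generic
IOMMU API drives the callbacks this patch wires up. The helper
unmap_batch() and its parameters are hypothetical, only for
illustration; iommu_unmap_fast(), iommu_tlb_range_add() and
iommu_tlb_sync() are the existing wrappers from include/linux/iommu.h.
With this patch applied, iommu_tlb_range_add() is a no-op on AMD and
iommu_tlb_sync() performs a full domain flush:

#include <linux/iommu.h>

/*
 * Hypothetical example: unmap a run of equally sized pages and issue
 * a single IOTLB invalidation at the end, instead of one flush per
 * unmapped page.
 */
static void unmap_batch(struct iommu_domain *domain, unsigned long iova,
			size_t pgsize, unsigned int npages)
{
	unsigned int i;

	for (i = 0; i < npages; i++) {
		/* Clears the page-table entries; does not flush the IOTLB. */
		iommu_unmap_fast(domain, iova + i * pgsize, pgsize);

		/* Records the range to invalidate; a no-op on AMD for now. */
		iommu_tlb_range_add(domain, iova + i * pgsize, pgsize);
	}

	/* One flush for the whole batch; a full domain flush on AMD. */
	iommu_tlb_sync(domain);
}

A caller can thus batch many unmaps and pay for a single invalidation,
which is the point of the fast IOTLB flushing interface.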