Message-ID: <4449393d-f631-52f9-e98a-eb056a22b870@linux.intel.com>
Date: Wed, 1 Jul 2020 09:11:21 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...ux.intel.com>,
iommu@...ts.linux-foundation.org,
LKML <linux-kernel@...r.kernel.org>,
Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>
Cc: baolu.lu@...ux.intel.com, Yi Liu <yi.l.liu@...el.com>,
"Tian, Kevin" <kevin.tian@...el.com>,
Raj Ashok <ashok.raj@...el.com>,
Eric Auger <eric.auger@...hat.com>
Subject: Re: [PATCH v2 5/7] iommu/vt-d: Fix devTLB flush for vSVA
On 7/1/20 5:07 AM, Jacob Pan wrote:
> From: Liu Yi L <yi.l.liu@...el.com>
>
> For guest SVA usage, in order to reduce the number of VMEXITs, a guest
> IOTLB flush request also covers the device TLB.
>
> On the host side, the IOMMU driver performs both the IOTLB and the
> implicit devTLB invalidation. When PASID-selective granularity is
> requested by the guest, we need to derive an equivalent address range
> for the devTLB flush instead of using the address information in the
> UAPI data, because, unlike the IOTLB flush, the devTLB flush does not
> support PASID-selective granularity. That is to say, we need to set
> the following in the PASID-based devTLB invalidation descriptor:
> - entire 64 bit range in address ~(0x1 << 63)
> - S bit = 1 (VT-d CH 6.5.2.6).
>
> Without this fix, the device TLB flush range is not set properly for
> PASID-selective granularity. This patch also merges the devTLB flush
> code for the implicit and explicit cases.
>
> Fixes: 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
> Signed-off-by: Liu Yi L <yi.l.liu@...el.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Acked-by: Lu Baolu <baolu.lu@...ux.intel.com>
Best regards,
baolu
> ---
> drivers/iommu/intel/iommu.c | 28 ++++++++++++++++++----------
> 1 file changed, 18 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 96340da57075..6a0c62c7395c 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5408,7 +5408,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
> sid = PCI_DEVID(bus, devfn);
>
> /* Size is only valid in address selective invalidation */
> - if (inv_info->granularity != IOMMU_INV_GRANU_PASID)
> + if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
> size = to_vtd_size(inv_info->addr_info.granule_size,
> inv_info->addr_info.nb_granules);
>
> @@ -5417,6 +5417,7 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
> IOMMU_CACHE_INV_TYPE_NR) {
> int granu = 0;
> u64 pasid = 0;
> + u64 addr = 0;
>
> granu = to_vtd_granularity(cache_type, inv_info->granularity);
> if (granu == -EINVAL) {
> @@ -5456,24 +5457,31 @@ intel_iommu_sva_invalidate(struct iommu_domain *domain, struct device *dev,
> (granu == QI_GRAN_NONG_PASID) ? -1 : 1 << size,
> inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_LEAF);
>
> + if (!info->ats_enabled)
> + break;
> /*
> * Always flush device IOTLB if ATS is enabled. vIOMMU
> * in the guest may assume IOTLB flush is inclusive,
> * which is more efficient.
> */
> - if (info->ats_enabled)
> - qi_flush_dev_iotlb_pasid(iommu, sid,
> - info->pfsid, pasid,
> - info->ats_qdep,
> - inv_info->addr_info.addr,
> - size);
> - break;
> + fallthrough;
> case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
> + /*
> + * There is no PASID selective flush for device TLB, so
> + * the equivalent of that is we set the size to be the
> + * entire range of 64 bit. User only provides PASID info
> + * without address info. So we set addr to 0.
> + */
> + if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
> + size = 64 - VTD_PAGE_SHIFT;
> + addr = 0;
> + } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
> + addr = inv_info->addr_info.addr;
> +
> if (info->ats_enabled)
> qi_flush_dev_iotlb_pasid(iommu, sid,
> info->pfsid, pasid,
> - info->ats_qdep,
> - inv_info->addr_info.addr,
> + info->ats_qdep, addr,
> size);
> else
> pr_warn_ratelimited("Passdown device IOTLB flush w/o ATS!\n");
>