Message-ID: <20240701172319.264e718c@jacob-builder>
Date: Mon, 1 Jul 2024 17:23:19 -0700
From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
 Robin Murphy <robin.murphy@....com>, Jason Gunthorpe <jgg@...pe.ca>,
 Kevin Tian <kevin.tian@...el.com>, iommu@...ts.linux.dev,
 linux-kernel@...r.kernel.org, jacob.jun.pan@...ux.intel.com
Subject: Re: [PATCH v2 1/2] iommu/vt-d: Add helper to flush caches for
context change
On Thu, 27 Jun 2024 10:31:20 +0800, Lu Baolu <baolu.lu@...ux.intel.com>
wrote:
> +/*
> + * Cache invalidations after change in a context table entry that was present
> + * according to the Spec 6.5.3.3 (Guidance to Software for Invalidations). If
> + * IOMMU is in scalable mode and all PASID table entries of the device were
> + * non-present, set affect_domains to true. Otherwise, false.
> + */
The spec says:
"Domain-selective PASID-cache invalidation to affected domains (can be
skipped if all PASID entries were not-present and CM=0)"
So, according to this comment, we should skip the PASID cache invalidation
when affect_domains is true.
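
To put that spec bullet in code terms, my reading would be something like
the below (rough, untested sketch using the names from this patch;
skip_pasid_flush is just illustrative):

        /*
         * Per 6.5.3.3, the per-domain PASID-cache invalidation can be
         * skipped when all PASID entries were not-present and CM=0,
         * which is exactly the affect_domains == true case described
         * in the comment above.
         */
        bool skip_pasid_flush = affect_domains && !cap_caching_mode(iommu->cap);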
> +void intel_context_flush_present(struct device_domain_info *info,
> +                                 struct context_entry *context,
> +                                 bool affect_domains)
> +{
> +        struct intel_iommu *iommu = info->iommu;
> +        u16 did = context_domain_id(context);
> +        struct pasid_entry *pte;
> +        int i;
> +
> +        assert_spin_locked(&iommu->lock);
> +
> +        /*
> +         * Device-selective context-cache invalidation. The Domain-ID field
> +         * of the Context-cache Invalidate Descriptor is ignored by hardware
> +         * when operating in scalable mode. Therefore the @did value doesn't
> +         * matter in scalable mode.
> +         */
> +        iommu->flush.flush_context(iommu, did, PCI_DEVID(info->bus, info->devfn),
> +                                   DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
> +
> +        /*
> +         * For legacy mode:
> +         * - Domain-selective IOTLB invalidation
> +         * - Global Device-TLB invalidation to all affected functions
> +         */
> +        if (!sm_supported(iommu)) {
> +                iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
> +                __context_flush_dev_iotlb(info);
> +
> +                return;
> +        }
> +
> +        /*
> +         * For scalable mode:
> +         * - Domain-selective PASID-cache invalidation to affected domains
> +         * - Domain-selective IOTLB invalidation to affected domains
> +         * - Global Device-TLB invalidation to affected functions
> +         */
> +        if (affect_domains) {
> +                for (i = 0; i < info->pasid_table->max_pasid; i++) {
> +                        pte = intel_pasid_get_entry(info->dev, i);
> +                        if (!pte || !pasid_pte_is_present(pte))
> +                                continue;
> +
> +                        did = pasid_get_domain_id(pte);
> +                        qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
This conflicts with the comment above, where the PASID cache flush can be
skipped if affect_domains == true, no?
> +                        iommu->flush.flush_iotlb(iommu, did, 0, 0,
> +                                                 DMA_TLB_DSI_FLUSH);
> +                }
> +        }
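
If affect_domains keeps the meaning from the function comment, I would
expect the guard here to be inverted, something along these lines
(completely untested, just to illustrate my reading):

        /*
         * Illustration only: some PASID entries were present
         * (affect_domains == false per the comment), so flush the
         * PASID-cache and IOTLB for each affected domain. When all
         * entries were non-present there are no affected domains and
         * the whole loop can be skipped.
         */
        if (!affect_domains) {
                for (i = 0; i < info->pasid_table->max_pasid; i++) {
                        pte = intel_pasid_get_entry(info->dev, i);
                        if (!pte || !pasid_pte_is_present(pte))
                                continue;

                        did = pasid_get_domain_id(pte);
                        qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
                        iommu->flush.flush_iotlb(iommu, did, 0, 0,
                                                 DMA_TLB_DSI_FLUSH);
                }
        }

Or maybe the function comment means the opposite of what it says, in which
case it is the comment that needs rewording.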
Thanks,
Jacob