Message-ID: <80d727b4-c1eb-49d1-9b4a-ab3f0a4b54e2@linux.intel.com>
Date: Thu, 27 Jun 2024 16:21:45 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: "Tian, Kevin" <kevin.tian@...el.com>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
Jason Gunthorpe <jgg@...pe.ca>
Cc: baolu.lu@...ux.intel.com, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/2] iommu/vt-d: Add helper to flush caches for context
change
On 2024/6/27 14:08, Tian, Kevin wrote:
>> From: Lu Baolu <baolu.lu@...ux.intel.com>
>> Sent: Thursday, June 27, 2024 10:31 AM
>>
>> +/*
>> + * Cache invalidations after change in a context table entry that was present
>> + * according to the Spec 6.5.3.3 (Guidance to Software for Invalidations). If
>> + * IOMMU is in scalable mode and all PASID table entries of the device were
>> + * non-present, set affect_domains to true. Otherwise, false.
>
> if no PASID is present then the flag should be false.
>
> s/affect_domains/flush_domains/
Yes.
>
>> + */
>> +void intel_context_flush_present(struct device_domain_info *info,
>> +                                 struct context_entry *context,
>> +                                 bool affect_domains)
>> +{
>> +        struct intel_iommu *iommu = info->iommu;
>> +        u16 did = context_domain_id(context);
>> +        struct pasid_entry *pte;
>> +        int i;
>> +
>> +        assert_spin_locked(&iommu->lock);
>> +
>> +        /*
>> +         * Device-selective context-cache invalidation. The Domain-ID field
>> +         * of the Context-cache Invalidate Descriptor is ignored by hardware
>> +         * when operating in scalable mode. Therefore the @did value doesn't
>> +         * matter in scalable mode.
>> +         */
>> +        iommu->flush.flush_context(iommu, did, PCI_DEVID(info->bus, info->devfn),
>> +                                   DMA_CCMD_MASK_NOBIT, DMA_CCMD_DEVICE_INVL);
>> +
>> +        /*
>> +         * For legacy mode:
>> +         * - Domain-selective IOTLB invalidation
>> +         * - Global Device-TLB invalidation to all affected functions
>> +         */
>> +        if (!sm_supported(iommu)) {
>> +                iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
>> +                __context_flush_dev_iotlb(info);
>> +
>> +                return;
>> +        }
>> +
>> +        /*
>> +         * For scalable mode:
>> +         * - Domain-selective PASID-cache invalidation to affected domains
>> +         * - Domain-selective IOTLB invalidation to affected domains
>> +         * - Global Device-TLB invalidation to affected functions
>> +         */
>> +        if (affect_domains) {
>> +                for (i = 0; i < info->pasid_table->max_pasid; i++) {
>> +                        pte = intel_pasid_get_entry(info->dev, i);
>> +                        if (!pte || !pasid_pte_is_present(pte))
>> +                                continue;
>> +
>> +                        did = pasid_get_domain_id(pte);
>> +                        qi_flush_pasid_cache(iommu, did, QI_PC_ALL_PASIDS, 0);
>> +                        iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
>> +                }
>> +        }
>> +
>> +        __context_flush_dev_iotlb(info);
>> +}
>> --
>> 2.34.1
>>
>
> this change moves the entire cache invalidation flow inside
> iommu->lock. Though the directly-affected operations are not in a
> critical path, the indirect impact applies to all other paths acquiring
> iommu->lock, given that it will be held unnecessarily longer after this
> change.
>
> If the only reason for holding iommu->lock is to walk the PASID
> table, we could probably collect a bitmap of DIDs during the locked
> walk and then invalidate each of them in a loop outside of
> iommu->lock. At this point we only care about the DIDs associated
> with the old context entry. A new PASID attach will hit the new
> context entry, and a concurrent PASID detach may at worst cause
> duplicated invalidations.
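
If I understand correctly, the idea is roughly the following (untested
sketch only, just to make sure we mean the same thing; the helper name
and the GFP flag are placeholders):

static void flush_context_affected_domains(struct intel_iommu *iommu,
                                           struct device_domain_info *info)
{
        unsigned int nr_doms = cap_ndoms(iommu->cap);
        struct pasid_entry *pte;
        unsigned long *dids;
        unsigned int bit;
        int i;

        /* GFP flag is a placeholder; the real calling context may be atomic. */
        dids = bitmap_zalloc(nr_doms, GFP_KERNEL);
        if (!dids)
                return;

        /* Collect the DIDs of present PASID entries under the lock... */
        spin_lock(&iommu->lock);
        for (i = 0; i < info->pasid_table->max_pasid; i++) {
                pte = intel_pasid_get_entry(info->dev, i);
                if (!pte || !pasid_pte_is_present(pte))
                        continue;
                set_bit(pasid_get_domain_id(pte), dids);
        }
        spin_unlock(&iommu->lock);

        /* ...and issue the invalidations without holding it. */
        for_each_set_bit(bit, dids, nr_doms) {
                qi_flush_pasid_cache(iommu, bit, QI_PC_ALL_PASIDS, 0);
                iommu->flush.flush_iotlb(iommu, bit, 0, 0, DMA_TLB_DSI_FLUSH);
        }

        bitmap_free(dids);
}
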
The iommu->lock is not only for the PASID table walk. The basic
scheme here is that once a present context table entry is changed,
no PASID entry may be altered until all the related caches have been
flushed, because the configuration of the context entry may also
affect PASID translation.

Previously we did not need this lock, as all those cases changed the
context entry from present to non-present and we were certain that
all PASID entries were empty. Now that this is becoming a generic
helper that also serves scenarios where the entry remains present
after the change, the lock is needed to ensure that no PASID entry
changes occur at the same time.
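
In other words, callers are expected to do something roughly like this
(sketch only; context_entry_set_something() stands in for whatever
actually modifies the entry):

static void example_context_change(struct device_domain_info *info,
                                   struct context_entry *context)
{
        struct intel_iommu *iommu = info->iommu;

        spin_lock(&iommu->lock);
        /* Modify the still-present context entry (hypothetical helper)... */
        context_entry_set_something(context);
        /*
         * ...and flush the caches before dropping the lock, so that no
         * concurrent PASID table update can run between the context entry
         * change and the invalidation.
         */
        intel_context_flush_present(info, context, true);
        spin_unlock(&iommu->lock);
}
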
Best regards,
baolu