Message-ID: <7f739f3b-b2a1-8a81-e134-738bdf2c44eb@linux.intel.com>
Date: Wed, 4 May 2022 16:06:49 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jacob Pan <jacob.jun.pan@...el.com>
Cc: Joerg Roedel <joro@...tes.org>, Jason Gunthorpe <jgg@...dia.com>,
Alex Williamson <alex.williamson@...hat.com>,
Kevin Tian <kevin.tian@...el.com>,
Liu Yi L <yi.l.liu@...el.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/5] iommu/vt-d: Check domain force_snooping against
attached devices
On 2022/5/3 05:31, Jacob Pan wrote:
> Hi BaoLu,
Hi Jacob,
>
> On Sun, 1 May 2022 19:24:32 +0800, Lu Baolu <baolu.lu@...ux.intel.com>
> wrote:
>
>> As domain->force_snooping only impacts the devices attached to the
>> domain, there's no need to check against all IOMMU units. At the same
>> time, for a brand new domain (one that hasn't been attached to any
>> device) the force_snooping field could be set, but the attach_dev
>> callback will return failure if it tries to attach a device whose
>> IOMMU has no snoop control capability.
>>
>> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
>> ---
>> drivers/iommu/intel/pasid.h | 2 ++
>> drivers/iommu/intel/iommu.c | 50 ++++++++++++++++++++++++++++++++++++-
>> drivers/iommu/intel/pasid.c | 18 +++++++++++++
>> 3 files changed, 69 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
>> index ab4408c824a5..583ea67fc783 100644
>> --- a/drivers/iommu/intel/pasid.h
>> +++ b/drivers/iommu/intel/pasid.h
>> @@ -123,4 +123,6 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu,
>>  				 bool fault_ignore);
>> int vcmd_alloc_pasid(struct intel_iommu *iommu, u32 *pasid);
>> void vcmd_free_pasid(struct intel_iommu *iommu, u32 pasid);
>> +void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
>> + struct device *dev, u32 pasid);
>> #endif /* __INTEL_PASID_H */
>> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
>> index 98050943d863..3c1c228f9031 100644
>> --- a/drivers/iommu/intel/iommu.c
>> +++ b/drivers/iommu/intel/iommu.c
>> @@ -4554,13 +4554,61 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
>>  	return phys;
>> }
>>
>> +static bool domain_support_force_snooping(struct dmar_domain *domain)
>> +{
>> + struct device_domain_info *info;
>> + unsigned long flags;
>> + bool support = true;
>> +
>> + spin_lock_irqsave(&device_domain_lock, flags);
>> + if (list_empty(&domain->devices))
>> + goto out;
>> +
>> + list_for_each_entry(info, &domain->devices, link) {
>> + if (!ecap_sc_support(info->iommu->ecap)) {
>> + support = false;
>> + break;
>> + }
>> + }
> why not just check the flag dmar_domain->force_snooping? devices wouldn't
> be able to attach if !ecap_sc, right?
I should check "dmar_domain->force_snooping" first. Only if this is the
first time that the flag is about to be set do we need to check the
snoop control capabilities of the attached devices' IOMMUs.
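
Something along these lines is what I have in mind (untested sketch;
details may change in the next version):

static bool domain_support_force_snooping(struct dmar_domain *domain)
{
	struct device_domain_info *info;
	unsigned long flags;
	bool support = true;

	/*
	 * Once force_snooping is set, attach_dev already rejects any
	 * device whose IOMMU lacks snoop control, so there is no need
	 * to walk the device list again.
	 */
	if (domain->force_snooping)
		return true;

	spin_lock_irqsave(&device_domain_lock, flags);
	list_for_each_entry(info, &domain->devices, link) {
		if (!ecap_sc_support(info->iommu->ecap)) {
			support = false;
			break;
		}
	}
	spin_unlock_irqrestore(&device_domain_lock, flags);

	return support;
}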
>
>> +out:
>> + spin_unlock_irqrestore(&device_domain_lock, flags);
>> + return support;
>> +}
>> +
>> +static void domain_set_force_snooping(struct dmar_domain *domain)
>> +{
>> + struct device_domain_info *info;
>> + unsigned long flags;
>> +
>> + /*
>> + * Second level page table supports per-PTE snoop control. The
>> + * iommu_map() interface will handle this by setting SNP bit.
>> + */
>> + if (!domain_use_first_level(domain))
>> + return;
>> +
>> + spin_lock_irqsave(&device_domain_lock, flags);
>> + if (list_empty(&domain->devices))
>> + goto out_unlock;
>> +
>> + list_for_each_entry(info, &domain->devices, link)
>> +		intel_pasid_setup_page_snoop_control(info->iommu, info->dev,
>> +						     PASID_RID2PASID);
>> +
> I guess other DMA API PASIDs need to have sc bit set as well. I will keep
> this in mind for my DMA API PASID patch.
Kernel DMA doesn't need to set the PGSNP bit; the x86 arch is always
DMA coherent. Force snooping is only needed when the device is
controlled by user space and the VMM is optimized not to support
virtualization of the wbinvd instruction.
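
In other words, this only matters on the enforce_cache_coherency path,
which a user-space backend such as VFIO invokes through the new iommu
core helper; kernel DMA never does. A rough illustration (not part of
this patch; call sites simplified and "coherent" is just a placeholder
flag):

	/* User-space assignment path (e.g. VFIO), simplified: */
	if (!iommu_enforce_cache_coherency(domain))
		/* no snoop enforcement -> KVM must emulate wbinvd */
		coherent = false;

	/*
	 * Kernel DMA path: nothing to do. x86 DMA is already coherent,
	 * so RID2PASID / DMA API PASIDs never need PGSNP for that.
	 */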
>
>> +out_unlock:
>> + spin_unlock_irqrestore(&device_domain_lock, flags);
>> +}
>> +
>> static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
>> {
>> struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>>
>> - if (!domain_update_iommu_snooping(NULL))
>> + if (!domain_support_force_snooping(dmar_domain))
>> return false;
>> +
>> + domain_set_force_snooping(dmar_domain);
>> dmar_domain->force_snooping = true;
>> +
> nit: spurious change
That's intentional. I'd like a blank line before the return at the end.
>> return true;
>> }
>>
>> diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
>> index f8d215d85695..815c744e6a34 100644
>> --- a/drivers/iommu/intel/pasid.c
>> +++ b/drivers/iommu/intel/pasid.c
>> @@ -762,3 +762,21 @@ int intel_pasid_setup_pass_through(struct intel_iommu *iommu,
>> return 0;
>> }
>> +
>> +/*
>> + * Set the page snoop control for a pasid entry which has been set up.
>> + */
>> +void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
>> + struct device *dev, u32 pasid)
>> +{
>> + struct pasid_entry *pte;
>> + u16 did;
>> +
>> + pte = intel_pasid_get_entry(dev, pasid);
>> + if (WARN_ON(!pte || !pasid_pte_is_present(pte)))
>> + return;
>> +
>> + pasid_set_pgsnp(pte);
>> + did = pasid_get_domain_id(pte);
>> + pasid_flush_caches(iommu, pte, pasid, did);
>> +}
>
>
> Thanks,
>
> Jacob
Best regards,
baolu