Message-ID: <07fc34e2-27bb-590e-805d-083985acc39f@linux.intel.com>
Date: Fri, 4 Sep 2020 10:16:02 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: "Tian, Kevin" <kevin.tian@...el.com>,
Joerg Roedel <joro@...tes.org>
Cc: baolu.lu@...ux.intel.com, "Raj, Ashok" <ashok.raj@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: Re: [PATCH v2 1/1] iommu/vt-d: Use device numa domain if RHSA is missing
Hi Kevin,
On 9/4/20 10:16 AM, Tian, Kevin wrote:
>> From: Lu Baolu
>> Sent: Friday, September 4, 2020 9:03 AM
>>
>> If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
>> table, we could default to the device NUMA domain as a fallback. This could
>> also benefit a vIOMMU use case where only a single vIOMMU is exposed, hence
>> no RHSA will be present but the device NUMA domain can still be correct.
>
> My comment on this is not fixed. It is not restricted to the single-vIOMMU
> situation, and it may actually also happen on a physical platform if some
> firmware doesn't provide RHSA information.
Ah, yes. I will remove that sentence since it's the same for both bare metal
and virtualization.
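
For context, what the patch ends up doing boils down to the sketch below.
This is an illustration only (example_pick_domain_node() is a made-up helper,
not the patch code): prefer the node reported by a DMAR RHSA entry, and fall
back to the device's own node, which on ACPI systems usually comes from the
host bridge's _PXM and can therefore be valid even when no RHSA is present.

#include <linux/device.h>	/* dev_to_node() */
#include <linux/numa.h>		/* NUMA_NO_NODE */

/*
 * Illustration only: prefer the IOMMU node reported by a DMAR RHSA
 * entry; if firmware did not provide one, fall back to the node of
 * the device itself.
 */
static int example_pick_domain_node(struct device *dev, int iommu_nid)
{
	if (iommu_nid != NUMA_NO_NODE)
		return iommu_nid;

	return dev_to_node(dev);
}
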
>
> with that being fixed:
>
> Reviewed-by: Kevin Tian <kevin.tian@...el.com>
Thank you!
Best regards,
baolu
>
>>
>> Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>
>> Cc: Kevin Tian <kevin.tian@...el.com>
>> Cc: Ashok Raj <ashok.raj@...el.com>
>> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
>> ---
>>  drivers/iommu/intel/iommu.c | 37 +++++++++++++++++++++++++++++++++++--
>>  1 file changed, 35 insertions(+), 2 deletions(-)
>>
>> Change log:
>> v1->v2:
>> - Add a comment as suggested by Kevin.
>> https://lore.kernel.org/linux-iommu/MWHPR11MB1645E6D6BD1EFDFA139AA37C8C520@...PR11MB1645.namprd11.prod.outlook.com/
>>
>> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
>> index 7f844d1c8cd9..69d5a87188f4 100644
>> --- a/drivers/iommu/intel/iommu.c
>> +++ b/drivers/iommu/intel/iommu.c
>> @@ -698,12 +698,47 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
>>  	return fls(mask);
>>  }
>>
>> +static int domain_update_device_node(struct dmar_domain *domain)
>> +{
>> +	struct device_domain_info *info;
>> +	int nid = NUMA_NO_NODE;
>> +
>> +	assert_spin_locked(&device_domain_lock);
>> +
>> +	if (list_empty(&domain->devices))
>> +		return NUMA_NO_NODE;
>> +
>> +	list_for_each_entry(info, &domain->devices, link) {
>> +		if (!info->dev)
>> +			continue;
>> +
>> +		/*
>> +		 * There could possibly be multiple device NUMA nodes as
>> +		 * devices within the same domain may sit behind different
>> +		 * IOMMUs. There isn't a perfect answer in such a situation,
>> +		 * so we select a first-come, first-served policy.
>> +		 */
>> +		nid = dev_to_node(info->dev);
>> +		if (nid != NUMA_NO_NODE)
>> +			break;
>> +	}
>> +
>> +	return nid;
>> +}
>> +
>>  /* Some capabilities may be different across iommus */
>>  static void domain_update_iommu_cap(struct dmar_domain *domain)
>>  {
>>  	domain_update_iommu_coherency(domain);
>>  	domain->iommu_snooping = domain_update_iommu_snooping(NULL);
>>  	domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
>> +
>> +	/*
>> +	 * If RHSA is missing, we should default to the device NUMA domain
>> +	 * as a fallback.
>> +	 */
>> +	if (domain->nid == NUMA_NO_NODE)
>> +		domain->nid = domain_update_device_node(domain);
>>  }
>>
>>  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
>> @@ -5096,8 +5131,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>>  	if (type == IOMMU_DOMAIN_DMA)
>>  		intel_init_iova_domain(dmar_domain);
>>
>> -	domain_update_iommu_cap(dmar_domain);
>> -
>>  	domain = &dmar_domain->domain;
>>  	domain->geometry.aperture_start = 0;
>>  	domain->geometry.aperture_end   = __DOMAIN_MAX_ADDR(dmar_domain->gaw);
>> --
>> 2.17.1
>>
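
One more note on the first-come, first-served policy in
domain_update_device_node(): if the devices attached to one domain report
different nodes, the first device with a valid node simply decides. Below is a
minimal, hypothetical sketch of just that selection step
(pick_first_valid_node() is not a real kernel helper):

#include <linux/numa.h>		/* NUMA_NO_NODE */

/* Return the first valid node in the list, or NUMA_NO_NODE if none. */
static int pick_first_valid_node(const int *dev_nodes, int count)
{
	int i;

	for (i = 0; i < count; i++) {
		if (dev_nodes[i] != NUMA_NO_NODE)
			return dev_nodes[i];
	}

	return NUMA_NO_NODE;
}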