Message-ID: <757c4e10-519a-6736-8f22-7ae7308434b4@linux.intel.com>
Date: Mon, 31 Aug 2020 09:16:03 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: "Tian, Kevin" <kevin.tian@...el.com>,
Joerg Roedel <joro@...tes.org>
Cc: baolu.lu@...ux.intel.com,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>
Subject: Re: [PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing
Hi Kevin,
Thanks a lot for looking at my patch.
On 8/28/20 10:13 AM, Tian, Kevin wrote:
>> From: Lu Baolu <baolu.lu@...ux.intel.com>
>> Sent: Thursday, August 27, 2020 1:57 PM
>>
>> If there are multiple NUMA domains but the RHSA is missing in ACPI/DMAR
>> table, we could default to the device NUMA domain as a fallback. This
>> also benefits the vIOMMU use case where only a single vIOMMU is
>> exposed, hence no RHSA will be present but the device numa domain can
>> still be correct.
>
> this benefits vIOMMU, but it is not necessarily limited to the
> single-vIOMMU case. The logic still holds in multiple-vIOMMU cases as
> long as RHSA is not provided.
Yes. Will refine the description.
>
>>
>> Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>
>> Cc: Kevin Tian <kevin.tian@...el.com>
>> Cc: Ashok Raj <ashok.raj@...el.com>
>> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
>> ---
>> drivers/iommu/intel/iommu.c | 31 +++++++++++++++++++++++++++++--
>> 1 file changed, 29 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
>> index e0516d64d7a3..bce158468abf 100644
>> --- a/drivers/iommu/intel/iommu.c
>> +++ b/drivers/iommu/intel/iommu.c
>> @@ -700,12 +700,41 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
>> return fls(mask);
>> }
>>
>> +static int domain_update_device_node(struct dmar_domain *domain)
>> +{
>> + struct device_domain_info *info;
>> + int nid = NUMA_NO_NODE;
>> +
>> + assert_spin_locked(&device_domain_lock);
>> +
>> + if (list_empty(&domain->devices))
>> + return NUMA_NO_NODE;
>> +
>> + list_for_each_entry(info, &domain->devices, link) {
>> + if (!info->dev)
>> + continue;
>> +
>> + nid = dev_to_node(info->dev);
>> + if (nid != NUMA_NO_NODE)
>> + break;
>> + }
>
> There could be multiple device numa nodes, as devices within the
> same domain may sit behind different IOMMUs. Of course there is no
> perfect answer in such a situation, and this patch is still an
> obvious improvement on the current always-on-node0 policy. But a
> comment about this implication would be welcome.
I will add a comment here. Without more knowledge, we currently
choose the first node found as a best-effort default.
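Something along the lines of the following (just a draft of the
comment for v2; the wording is not final):

	list_for_each_entry(info, &domain->devices, link) {
		if (!info->dev)
			continue;

		/*
		 * There could be multiple device numa nodes as devices
		 * within the same domain may sit behind different
		 * IOMMUs. There is no perfect answer in such a case,
		 * so we pick the node of the first device found as a
		 * best-effort default.
		 */
		nid = dev_to_node(info->dev);
		if (nid != NUMA_NO_NODE)
			break;
	}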
>
>> +
>> + return nid;
>> +}
>> +
>> /* Some capabilities may be different across iommus */
>> static void domain_update_iommu_cap(struct dmar_domain *domain)
>> {
>> domain_update_iommu_coherency(domain);
>> domain->iommu_snooping = domain_update_iommu_snooping(NULL);
>> domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
>> +
>> + /*
>> + * If RHSA is missing, we should default to the device numa domain
>> + * as fall back.
>> + */
>> + if (domain->nid == NUMA_NO_NODE)
>> + domain->nid = domain_update_device_node(domain);
>> }
>>
>> struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
>> @@ -5086,8 +5115,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>> if (type == IOMMU_DOMAIN_DMA)
>> intel_init_iova_domain(dmar_domain);
>>
>> - domain_update_iommu_cap(dmar_domain);
>> -
>
> Is it intended or by mistake? If the former, it looks like a separate fix...
It's a cleanup. When a domain is newly created, this call is
actually a no-op.
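To illustrate my understanding (a simplified sketch, not the exact
code paths):

	case IOMMU_DOMAIN_UNMANAGED:
		dmar_domain = alloc_domain(0);
		...
		/*
		 * Nothing is attached to the freshly allocated domain
		 * yet, so there is no per-device or per-IOMMU
		 * information to aggregate at this point. IIUC,
		 * domain_attach_iommu() calls domain_update_iommu_cap()
		 * again when the first device gets attached, and that
		 * is where the capabilities really get computed.
		 */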
>
>> domain = &dmar_domain->domain;
>> domain->geometry.aperture_start = 0;
>> domain->geometry.aperture_end =
>> --
>> 2.17.1
>
Best regards,
baolu