Message-ID: <MWHPR11MB1645A817E0C928BA83002B4C8C2D0@MWHPR11MB1645.namprd11.prod.outlook.com>
Date: Fri, 4 Sep 2020 02:16:18 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Joerg Roedel <joro@...tes.org>
CC: "Raj, Ashok" <ashok.raj@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: RE: [PATCH v2 1/1] iommu/vt-d: Use device numa domain if RHSA is missing
> From: Lu Baolu
> Sent: Friday, September 4, 2020 9:03 AM
>
> If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
> table, we could default to the device NUMA domain as a fallback. This could
> also benefit a vIOMMU use case where only a single vIOMMU is exposed, hence
> no RHSA will be present but the device NUMA domain can be correct.
My comment on this is not fixed: the problem is not restricted to the
single-vIOMMU situation, and it may also happen on a physical platform
if the firmware doesn't provide RHSA information.

With that fixed:

Reviewed-by: Kevin Tian <kevin.tian@...el.com>
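
(For reference, a minimal sketch of the fallback order under discussion:
prefer the node reported by firmware via RHSA, and only then fall back to
the NUMA node of an attached device. Illustration only, not part of the
patch; domain_pick_node() is a hypothetical helper.)

/*
 * Illustration only, assuming <linux/numa.h> and <linux/device.h>.
 * domain_pick_node() is a hypothetical helper, not code from this patch.
 */
static int domain_pick_node(int rhsa_node, struct device *dev)
{
	if (rhsa_node != NUMA_NO_NODE)
		return rhsa_node;	/* firmware provided an RHSA mapping */

	/* No RHSA (e.g. vIOMMU, or FW omitted it): use the device's node. */
	return dev_to_node(dev);	/* may still be NUMA_NO_NODE */
}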
>
> Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> Cc: Kevin Tian <kevin.tian@...el.com>
> Cc: Ashok Raj <ashok.raj@...el.com>
> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
> ---
> drivers/iommu/intel/iommu.c | 37 +++++++++++++++++++++++++++++++++++--
> 1 file changed, 35 insertions(+), 2 deletions(-)
>
> Change log:
> v1->v2:
> - Add a comment as suggested by Kevin.
> https://lore.kernel.org/linux-iommu/MWHPR11MB1645E6D6BD1EFDFA139AA37C8C520@...PR11MB1645.namprd11.prod.outlook.com/
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 7f844d1c8cd9..69d5a87188f4 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -698,12 +698,47 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
>  	return fls(mask);
>  }
>
> +static int domain_update_device_node(struct dmar_domain *domain)
> +{
> +	struct device_domain_info *info;
> +	int nid = NUMA_NO_NODE;
> +
> +	assert_spin_locked(&device_domain_lock);
> +
> +	if (list_empty(&domain->devices))
> +		return NUMA_NO_NODE;
> +
> +	list_for_each_entry(info, &domain->devices, link) {
> +		if (!info->dev)
> +			continue;
> +
> +		/*
> +		 * There could possibly be multiple device numa nodes as
> +		 * devices within the same domain may sit behind different
> +		 * IOMMUs. There isn't a perfect answer in such a situation,
> +		 * so we select a first come, first served policy.
> +		 */
> +		nid = dev_to_node(info->dev);
> +		if (nid != NUMA_NO_NODE)
> +			break;
> +	}
> +
> +	return nid;
> +}
> +
>  /* Some capabilities may be different across iommus */
>  static void domain_update_iommu_cap(struct dmar_domain *domain)
>  {
>  	domain_update_iommu_coherency(domain);
>  	domain->iommu_snooping = domain_update_iommu_snooping(NULL);
>  	domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
> +
> +	/*
> +	 * If RHSA is missing, we should default to the device numa domain
> +	 * as a fallback.
> +	 */
> +	if (domain->nid == NUMA_NO_NODE)
> +		domain->nid = domain_update_device_node(domain);
>  }
>
>  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
> @@ -5096,8 +5131,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>  	if (type == IOMMU_DOMAIN_DMA)
>  		intel_init_iova_domain(dmar_domain);
>
> -	domain_update_iommu_cap(dmar_domain);
> -
>  	domain = &dmar_domain->domain;
>  	domain->geometry.aperture_start = 0;
>  	domain->geometry.aperture_end   =
> --
> 2.17.1
>
> _______________________________________________
> iommu mailing list
> iommu@...ts.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu