Message-ID: <MWHPR11MB1645E6D6BD1EFDFA139AA37C8C520@MWHPR11MB1645.namprd11.prod.outlook.com>
Date: Fri, 28 Aug 2020 02:13:43 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Joerg Roedel <joro@...tes.org>
CC: "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
"Raj, Ashok" <ashok.raj@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>
Subject: RE: [PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Thursday, August 27, 2020 1:57 PM
>
> If there are multiple NUMA domains but the RHSA is missing in ACPI/DMAR
> table, we could default to the device NUMA domain as fall back. This also
> benefits the vIOMMU use case where only a single vIOMMU is exposed, hence
> no RHSA will be present but device numa domain can be correct.
This benefits vIOMMU, but it is not necessarily limited to the single-vIOMMU
case. The logic still holds with multiple vIOMMUs as long as RHSA is
not provided.
>
> Cc: Jacob Pan <jacob.jun.pan@...ux.intel.com>
> Cc: Kevin Tian <kevin.tian@...el.com>
> Cc: Ashok Raj <ashok.raj@...el.com>
> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
> ---
> drivers/iommu/intel/iommu.c | 31 +++++++++++++++++++++++++++++--
> 1 file changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index e0516d64d7a3..bce158468abf 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -700,12 +700,41 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
>  	return fls(mask);
>  }
>
> +static int domain_update_device_node(struct dmar_domain *domain)
> +{
> +	struct device_domain_info *info;
> +	int nid = NUMA_NO_NODE;
> +
> +	assert_spin_locked(&device_domain_lock);
> +
> +	if (list_empty(&domain->devices))
> +		return NUMA_NO_NODE;
> +
> +	list_for_each_entry(info, &domain->devices, link) {
> +		if (!info->dev)
> +			continue;
> +
> +		nid = dev_to_node(info->dev);
> +		if (nid != NUMA_NO_NODE)
> +			break;
> +	}
There could be multiple device NUMA nodes, as devices within the
same domain may sit behind different IOMMUs. Of course there
is no perfect answer in such a situation, and this patch is still an
obvious improvement over the current always-on-node0 policy. But
a comment about this implication would be welcome.
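For example, something along these lines (just a rough sketch, wording
up to you) above the loop:

	/*
	 * Devices within the same domain may sit behind different
	 * IOMMUs and thus on different NUMA nodes. There is no
	 * perfect answer in that case, so simply take the node of
	 * the first device which reports one.
	 */
	list_for_each_entry(info, &domain->devices, link) {
		if (!info->dev)
			continue;

		nid = dev_to_node(info->dev);
		if (nid != NUMA_NO_NODE)
			break;
	}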
> +
> +	return nid;
> +}
> +
> /* Some capabilities may be different across iommus */
> static void domain_update_iommu_cap(struct dmar_domain *domain)
> {
>  	domain_update_iommu_coherency(domain);
>  	domain->iommu_snooping = domain_update_iommu_snooping(NULL);
>  	domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
> +
> +	/*
> +	 * If RHSA is missing, we should default to the device numa domain
> +	 * as fall back.
> +	 */
> +	if (domain->nid == NUMA_NO_NODE)
> +		domain->nid = domain_update_device_node(domain);
> }
>
>  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
> @@ -5086,8 +5115,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>  	if (type == IOMMU_DOMAIN_DMA)
>  		intel_init_iova_domain(dmar_domain);
> 
> -	domain_update_iommu_cap(dmar_domain);
> -
Is this intended or a mistake? If the former, it looks like a separate fix...
>  	domain = &dmar_domain->domain;
>  	domain->geometry.aperture_start = 0;
>  	domain->geometry.aperture_end   =
> --
> 2.17.1