Message-Id: <8c4a69eca4d0591f30c112df59c5098c24923bd3.1644543449.git.darren@os.amperecomputing.com>
Date: Thu, 10 Feb 2022 17:42:46 -0800
From: Darren Hart <darren@...amperecomputing.com>
To: LKML <linux-kernel@...r.kernel.org>,
Linux Arm <linux-arm-kernel@...ts.infradead.org>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Barry Song <song.bao.hua@...ilicon.com>,
Valentin Schneider <valentin.schneider@....com>,
"D . Scott Phillips" <scott@...amperecomputing.com>,
Ilkka Koskinen <ilkka@...amperecomputing.com>,
stable@...r.kernel.org
Subject: [PATCH] arm64: smp: Skip MC domain for SoCs without shared cache

SoCs such as the Ampere Altra define clusters but have no shared
processor-side cache. As of v5.16 with CONFIG_SCHED_CLUSTER and
CONFIG_SCHED_MC, build_sched_domain() will BUG() with:

BUG: arch topology borken
     the CLS domain not a subset of the MC domain

for each CPU (160 times on a 2-socket Altra system with 80 cores per
socket). The MC
level cpu mask is then extended to that of the CLS child, and is later
removed entirely as redundant.
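
For reference, the message above comes from the cpumask subset check in
build_sched_domain() (kernel/sched/topology.c). The sketch below is
paraphrased from memory and trimmed to the relevant lines, so it may not
match v5.16 exactly, but it shows both the warning and the MC mask
extension described above:

	if (child) {
		...
		if (!cpumask_subset(sched_domain_span(child),
				    sched_domain_span(sd))) {
			pr_err("BUG: arch topology borken\n");
#ifdef CONFIG_SCHED_DEBUG
			pr_err("     the %s domain not a subset of the %s domain\n",
			       child->name, sd->name);
#endif
			/* Fixup: extend the parent (MC) span to cover the child (CLS) */
			cpumask_or(sched_domain_span(sd),
				   sched_domain_span(sd),
				   sched_domain_span(child));
		}
	}
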
This change detects when the cpu_coregroup_mask weight is 1 for every
CPU and, in that case, uses an alternative sched_domain_topology
equivalent to the default topology with CONFIG_SCHED_MC disabled.
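
For comparison, this is the default table in kernel/sched/topology.c as
I recall it for v5.16 (reproduced from memory, so check the tree for the
authoritative version); the new arm64 table in the patch below is simply
this table with the CONFIG_SCHED_MC entry dropped:

	static struct sched_domain_topology_level default_topology[] = {
	#ifdef CONFIG_SCHED_SMT
		{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
	#endif
	#ifdef CONFIG_SCHED_CLUSTER
		{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
	#endif
	#ifdef CONFIG_SCHED_MC
		{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
	#endif
		{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
		{ NULL, },
	};
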
The final resulting sched domain topology is unchanged with or without
CONFIG_SCHED_CLUSTER, and the BUG is avoided:

For CPU0:

With CLS:
CLS  [0-1]
DIE  [0-79]
NUMA [0-159]

Without CLS:
DIE  [0-79]
NUMA [0-159]

Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Barry Song <song.bao.hua@...ilicon.com>
Cc: Valentin Schneider <valentin.schneider@....com>
Cc: D. Scott Phillips <scott@...amperecomputing.com>
Cc: Ilkka Koskinen <ilkka@...amperecomputing.com>
Cc: <stable@...r.kernel.org> # 5.16.x
Signed-off-by: Darren Hart <darren@...amperecomputing.com>
---
arch/arm64/kernel/smp.c | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 27df5c1e6baa..0a78ac5c8830 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -715,9 +715,22 @@ void __init smp_init_cpus(void)
}
}
+static struct sched_domain_topology_level arm64_no_mc_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+ { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+ { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+ { cpu_cpu_mask, SD_INIT_NAME(DIE) },
+ { NULL, },
+};
+
void __init smp_prepare_cpus(unsigned int max_cpus)
{
const struct cpu_operations *ops;
+ bool use_no_mc_topology = true;
int err;
unsigned int cpu;
unsigned int this_cpu;
@@ -758,6 +771,25 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
set_cpu_present(cpu, true);
numa_store_cpu_info(cpu);
+
+ /*
+ * Only use no_mc topology if all cpu_coregroup_mask weights=1
+ */
+ if (cpumask_weight(cpu_coregroup_mask(cpu)) > 1)
+ use_no_mc_topology = false;
+ }
+
+ /*
+ * SoCs with no shared processor-side cache will have cpu_coregroup_mask
+ * weights=1. If they also define clusters with cpu_clustergroup_mask
+ * weights > 1, build_sched_domain() will trigger a BUG as the CLS
+ * cpu_mask will not be a subset of MC. It will extend the MC cpu_mask
+ * to match CLS, and later discard the MC level. Avoid the bug by using
+ * a topology without the MC if the cpu_coregroup_mask weights=1.
+ */
+ if (use_no_mc_topology) {
+ pr_info("cpu_coregroup_mask weights=1, skipping MC topology level\n");
+ set_sched_topology(arm64_no_mc_topology);
}
}
--
2.31.1