Message-Id: <20200116231757.610546114@linuxfoundation.org>
Date: Fri, 17 Jan 2020 00:17:42 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Sudeep Holla <sudeep.holla@....com>,
Lukasz Luba <lukasz.luba@....com>,
Ondrej Jirman <megous@...ous.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Russell King <rmk+kernel@...linux.org.uk>
Subject: [PATCH 5.4 145/203] ARM: 8943/1: Fix topology setup in case of CPU hotplug for CONFIG_SCHED_MC
From: Dietmar Eggemann <dietmar.eggemann@....com>
commit ff98a5f624d2910de050f1fc7f2a32769da86b51 upstream.
Commit ca74b316df96 ("arm: Use common cpu_topology structure and
functions.") changed cpu_coregroup_mask() from the ARM32 specific
implementation in arch/arm/include/asm/topology.h to the one shared
with ARM64 and RISCV in drivers/base/arch_topology.c.
Currently on ARM32 (TC2 w/ CONFIG_SCHED_MC) the task scheduler setup
code (w/ CONFIG_SCHED_DEBUG) shows this during CPU hotplug:
ERROR: groups don't span domain->span
It happens on scheduler domain MC for the remaining CPUs of the
cluster from which a CPU gets hot-plugged out.
It turns out that the shared cpu_coregroup_mask() requires the
hot-plugged CPU to be removed from the core_sibling mask via
remove_cpu_topology(). Otherwise the 'is core_sibling a subset of
cpumask_of_node()' check fails, and the task scheduler ends up using
cpumask_of_node instead of core_sibling, which is wrong on scheduler
domain MC.
e.g. CPU3 hot-plugged out on TC2 [cluster0: 0,3-4 cluster1: 1-2]:
cpu_coregroup_mask(): CPU3 cpumask_of_node=0-2,4 core_sibling=0,3-4
                                                                ^
should be:
cpu_coregroup_mask(): CPU3 cpumask_of_node=0-2,4 core_sibling=0,4
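For reference, the shared cpu_coregroup_mask() makes that decision
roughly like this (simplified sketch of the drivers/base/arch_topology.c
code around v5.4; the real function also considers LLC siblings):

  const struct cpumask *cpu_coregroup_mask(int cpu)
  {
          const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));

          /*
           * Use core_sibling only if it is fully contained in the NUMA
           * node mask; a stale hot-plugged CPU left in core_sibling
           * makes this check fail.
           */
          if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask))
                  core_mask = &cpu_topology[cpu].core_sibling;

          /* Otherwise the scheduler is handed cpumask_of_node(). */
          return core_mask;
  }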
Add a remove_cpu_topology() call to __cpu_disable() to remove the CPU
from the topology masks on a CPU hotplug-out operation.
At the same time, tweak store_cpu_topology() slightly so that it also
calls update_siblings_masks() when a CPU is hotplugged back in via
secondary_start_kernel()->smp_store_cpu_info().
This aligns the ARM32 implementation with the ARM64 one.
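For context, the shared remove_cpu_topology() clears the dying CPU out
of its siblings' topology masks, roughly along these lines (simplified
sketch of the drivers/base/arch_topology.c helper; the real function
also handles the LLC sibling mask):

  void remove_cpu_topology(unsigned int cpu)
  {
          int sibling;

          /* Drop the dying CPU from every sibling's core mask ... */
          for_each_cpu(sibling, topology_core_cpumask(cpu))
                  cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));

          /* ... and from every sibling's thread mask, ... */
          for_each_cpu(sibling, topology_sibling_cpumask(cpu))
                  cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));

          /* ... then reset the dying CPU's own topology fields. */
          clear_cpu_topology(cpu);
  }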
Guarding remove_cpu_topology() with CONFIG_GENERIC_ARCH_TOPOLOGY is
necessary since some Arm32 defconfigs (aspeed_g5_defconfig,
milbeaut_m10v_defconfig, spear13xx_defconfig) specify an explicit

  # CONFIG_ARM_CPU_TOPOLOGY is not set

together with this line in ./arch/arm/Kconfig:

  select GENERIC_ARCH_TOPOLOGY if ARM_CPU_TOPOLOGY

On those configurations drivers/base/arch_topology.c, which provides
remove_cpu_topology(), is not built, so an unguarded call would break
the build.
Fixes: ca74b316df96 ("arm: Use common cpu_topology structure and functions")
Reviewed-by: Sudeep Holla <sudeep.holla@....com>
Reviewed-by: Lukasz Luba <lukasz.luba@....com>
Tested-by: Lukasz Luba <lukasz.luba@....com>
Tested-by: Ondrej Jirman <megous@...ous.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Russell King <rmk+kernel@...linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/arm/kernel/smp.c | 4 ++++
arch/arm/kernel/topology.c | 10 +++++-----
2 files changed, 9 insertions(+), 5 deletions(-)
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -240,6 +240,10 @@ int __cpu_disable(void)
 	if (ret)
 		return ret;
 
+#ifdef CONFIG_GENERIC_ARCH_TOPOLOGY
+	remove_cpu_topology(cpu);
+#endif
+
 	/*
 	 * Take this CPU offline. Once we clear this, we can't return,
 	 * and we must not schedule until we're ready to give up the cpu.
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -196,9 +196,8 @@ void store_cpu_topology(unsigned int cpu
 	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
 	unsigned int mpidr;
 
-	/* If the cpu topology has been already set, just return */
-	if (cpuid_topo->core_id != -1)
-		return;
+	if (cpuid_topo->package_id != -1)
+		goto topology_populated;
 
 	mpidr = read_cpuid_mpidr();
 
@@ -231,14 +230,15 @@ void store_cpu_topology(unsigned int cpu
 		cpuid_topo->package_id = -1;
 	}
 
-	update_siblings_masks(cpuid);
-
 	update_cpu_capacity(cpuid);
 
 	pr_info("CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
 		cpuid, cpu_topology[cpuid].thread_id,
 		cpu_topology[cpuid].core_id,
 		cpu_topology[cpuid].package_id, mpidr);
+
+topology_populated:
+	update_siblings_masks(cpuid);
 }
 
 static inline int cpu_corepower_flags(void)