Message-Id: <20220822021520.6996-11-kernelfans@gmail.com>
Date: Mon, 22 Aug 2022 10:15:20 +0800
From: Pingfan Liu <kernelfans@...il.com>
To: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Pingfan Liu <kernelfans@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Sudeep Holla <sudeep.holla@....com>,
Phil Auld <pauld@...hat.com>, Rob Herring <robh@...nel.org>,
Ben Dooks <ben-linux@...ff.org>
Subject: [RFC 10/10] arm64: smp: Make __cpu_disable() parallel
On a dying CPU, take_cpu_down() calls __cpu_disable(). If the teardown
path supports parallelism, __cpu_disable() can run concurrently on
several CPUs, which may corrupt cpu_online_mask and related cpumasks
unless extra locking protects them.

At present the cpumasks are protected by cpu_add_remove_lock, but that
lock is taken far above __cpu_disable(). To protect __cpu_disable()
against parallel execution on the kexec quick-reboot path, introduce a
local lock, cpumap_lock.
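
For illustration only, here is a minimal userspace sketch of the locking
pattern this patch applies (the names and the pthread mutex are
hypothetical stand-ins, not the kernel API): each dying "CPU" clears its
bit in a shared online mask under one shared lock, so concurrent
teardown cannot lose updates to the mask.

/*
 * Userspace sketch (not kernel code): several threads acting as dying
 * CPUs clear their bit in a shared online mask in parallel. Without the
 * lock the read-modify-write of the mask could lose updates; with it,
 * each clear is atomic with respect to the others.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 8

static unsigned long online_mask = (1UL << NR_CPUS) - 1;
static pthread_mutex_t cpumap_lock = PTHREAD_MUTEX_INITIALIZER;

static void *cpu_disable(void *arg)
{
	unsigned long cpu = (unsigned long)arg;

	pthread_mutex_lock(&cpumap_lock);
	/* analogous to set_cpu_online(cpu, false) */
	online_mask &= ~(1UL << cpu);
	pthread_mutex_unlock(&cpumap_lock);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_CPUS];
	unsigned long cpu;

	/* Tear down all "secondary CPUs" (1..NR_CPUS-1) in parallel. */
	for (cpu = 1; cpu < NR_CPUS; cpu++)
		pthread_create(&threads[cpu], NULL, cpu_disable, (void *)cpu);
	for (cpu = 1; cpu < NR_CPUS; cpu++)
		pthread_join(threads[cpu], NULL);

	/* Only CPU 0 should remain online: expect 0x1. */
	printf("online mask after teardown: 0x%lx\n", online_mask);
	return 0;
}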
Signed-off-by: Pingfan Liu <kernelfans@...il.com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Sudeep Holla <sudeep.holla@....com>
Cc: Phil Auld <pauld@...hat.com>
Cc: Rob Herring <robh@...nel.org>
Cc: Ben Dooks <ben-linux@...ff.org>
To: linux-arm-kernel@...ts.infradead.org
To: linux-kernel@...r.kernel.org
---
arch/arm64/kernel/smp.c | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index ffc5d76cf695..fee8879048b0 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -287,6 +287,28 @@ static int op_cpu_disable(unsigned int cpu)
return 0;
}
+static DEFINE_SPINLOCK(cpumap_lock);
+
+static void __cpu_clear_maps(unsigned int cpu)
+{
+ /*
+ * In the kexec reboot case the cpu_add_remove_lock mutex cannot provide protection, so take the local cpumap_lock instead.
+ */
+ if (kexec_in_progress)
+ spin_lock(&cpumap_lock);
+ remove_cpu_topology(cpu);
+ numa_remove_cpu(cpu);
+
+ /*
+ * Take this CPU offline. Once we clear this, we can't return,
+ * and we must not schedule until we're ready to give up the cpu.
+ */
+ set_cpu_online(cpu, false);
+ if (kexec_in_progress)
+ spin_unlock(&cpumap_lock);
+
+}
+
/*
* __cpu_disable runs on the processor to be shutdown.
*/
@@ -299,14 +321,7 @@ int __cpu_disable(void)
if (ret)
return ret;
- remove_cpu_topology(cpu);
- numa_remove_cpu(cpu);
-
- /*
- * Take this CPU offline. Once we clear this, we can't return,
- * and we must not schedule until we're ready to give up the cpu.
- */
- set_cpu_online(cpu, false);
+ __cpu_clear_maps(cpu);
ipi_teardown(cpu);
/*
--
2.31.1