Date: Thu,  7 Dec 2023 17:02:56 -0800
From: Douglas Anderson <>
To: Catalin Marinas <>,
	Will Deacon <>
Cc: Mark Rutland <>,
	Marc Zyngier <>,
	Misono Tomohiro <>,
	Chen-Yu Tsai <>,
	Stephen Boyd <>,
	Daniel Thompson <>,
	Sumit Garg <>,
	Douglas Anderson <>,
	Frederic Weisbecker <>,
	"Guilherme G. Piccoli" <>,
	Josh Poimboeuf <>,
	Kees Cook <>,
	Peter Zijlstra <>,
	Thomas Gleixner <>,
	Tony Luck <>,
	Valentin Schneider <>
Subject: [PATCH] arm64: smp: smp_send_stop() and crash_smp_send_stop() should try non-NMI first

When testing hard lockup handling on my sc7180-trogdor-lazor device
with pseudo-NMI enabled, with serial console enabled and with kgdb
disabled, I found that the stack crawls printed to the serial console
ended up as a jumbled mess. After rebooting, the pstore-based console
looked fine though. Also, enabling kgdb to trap the panic made the
console look fine and avoided the mess.

After a bit of tracking down, I came to the conclusion that this was
what was happening (the locking pattern involved is sketched after the
list):
1. The panic path was stopping all other CPUs with
   panic_other_cpus_shutdown().
2. At least one of those other CPUs was in the middle of printing to
   the serial console and holding the console port's lock, which is
   grabbed with "irqsave". ...but since we were stopping with an NMI
   we didn't care about the "irqsave" and interrupted anyway.
3. Since we stopped the CPU while it was holding the lock it would
   never release it.
4. All future calls to output to the console would end up failing to
   get the lock in qcom_geni_serial_console_write(). This isn't
   _totally_ unexpected at panic time but it's a code path that's not
   well tested, hard to get right, and apparently doesn't work
   terribly well on the Qualcomm geni serial driver.
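
For context, serial console write handlers generally follow a locking
pattern like the sketch below (a simplified illustration, not the
verbatim qcom_geni_serial code; example_console_write() is a made-up
name). Once a CPU has been stopped while holding port->lock, the
trylock taken at oops time fails forever and later output is dropped:

  /*
   * Simplified sketch of the common console-write locking pattern.
   * Illustrative only; not the actual qcom_geni_serial code.
   */
  static void example_console_write(struct uart_port *port, const char *s,
                                    unsigned int count)
  {
      unsigned long flags;
      int locked = 1;

      if (oops_in_progress)
          /* Panic path: never wait on a lock a dead CPU may hold. */
          locked = spin_trylock_irqsave(&port->lock, flags);
      else
          spin_lock_irqsave(&port->lock, flags);

      /* ... push the characters into the UART FIFO ... */

      if (locked)
          spin_unlock_irqrestore(&port->lock, flags);
  }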

It would probably be a reasonable idea to try to make the Qualcomm
geni serial driver work better, but also it's nice not to get into
this situation in the first place.

Taking a page from what x86 appears to do in native_stop_other_cpus(),
let's do this (see the condensed sketch after the list):
1. First, we'll try to stop other CPUs with a normal IPI and wait a
   second. This gives them a chance to leave critical sections.
2. If CPUs fail to stop then we'll retry with an NMI, but give a much
   lower timeout since there's no good reason for a CPU not to react
   quickly to an NMI.
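
In condensed form, the new sequence looks like this (a sketch of the
logic in the diff below, using the same names):

  /* Phase 1: normal IPI; give CPUs up to 1 second to clear their bit. */
  cpumask_copy(to_cpumask(stop_mask), cpu_online_mask);
  cpumask_clear_cpu(smp_processor_id(), to_cpumask(stop_mask));
  smp_cross_call(to_cpumask(stop_mask), IPI_CPU_STOP);
  timeout = USEC_PER_SEC;
  while (!cpumask_empty(to_cpumask(stop_mask)) && timeout--)
      udelay(1);

  /* Phase 2: NMI fallback for stragglers; wait only 10 ms more. */
  if (!cpumask_empty(to_cpumask(stop_mask)) &&
      ipi_should_be_nmi(IPI_CPU_STOP_NMI)) {
      smp_cross_call(to_cpumask(stop_mask), IPI_CPU_STOP_NMI);
      timeout = 10 * USEC_PER_MSEC;
      while (!cpumask_empty(to_cpumask(stop_mask)) && timeout--)
          udelay(1);
  }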

This works well and avoids the corrupted console and (presumably)
could help avoid other similar issues.

In order to do this, we need to do a little re-organization of our
IPIs since we don't have any more free IDs. We'll do what was
suggested in previous conversations and combine "stop" and "crash
stop". That frees up an IPI so now we can have a "stop" and "stop
NMI".

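Assuming the enum keeps its current ordering, the ID reshuffle amounts
to the following (a sketch; the first hunk of the diff below is the
authoritative change):

  enum ipi_msg_type {
      IPI_RESCHEDULE,
      IPI_CALL_FUNC,
      IPI_CPU_STOP,       /* now covers both "stop" and "crash stop" */
      IPI_CPU_STOP_NMI,   /* takes the slot of IPI_CPU_CRASH_STOP */
      IPI_TIMER,
      IPI_IRQ_WORK,
      NR_IPI,
      /* ... */
  };
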
In order to do this we also need a slight change in the way we keep
track of which CPUs still need to be stopped. We need to know
specifically which CPUs haven't stopped yet when we fall back to NMI
but in the "crash stop" case the "cpu_online_mask" isn't updated as
CPUs go down. This is why that code path had an atomic of the number
of CPUs left. We'll solve this by making the cpumask into a
global. This has a potential memory implication: with NR_CPUS = 4096
this is 4096/8 = 512 bytes of globals. On the upside, in that same case
we take 512 bytes off the stack which could potentially have made the
stop code less reliable. It can be noted that the NMI backtrace code
(lib/nmi_backtrace.c) uses the same approach and that use also
confirms that updating the mask is safe from NMI.
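
For comparison, the lib/nmi_backtrace.c precedent looks roughly like
this (condensed from upstream; details trimmed):

  /* A global mask whose bit each CPU clears for itself from NMI
   * context; the initiating CPU just polls cpumask_empty(). */
  static cpumask_t backtrace_mask;

  bool nmi_cpu_backtrace(struct pt_regs *regs)
  {
      int cpu = smp_processor_id();

      if (cpumask_test_cpu(cpu, &backtrace_mask)) {
          /* ... dump this CPU's registers and stack ... */
          cpumask_clear_cpu(cpu, &backtrace_mask);
          return true;
      }
      return false;
  }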

All of the above lets us combine the logic for "stop" and "crash stop"
code, which appeared to have a bunch of arbitrary implementation
differences. Possibly this could make up for some of the 512 wasted
bytes. ;-)

Aside from the above change where we try a normal IPI and then an NMI,
the combined function has a few subtle differences:
* In the normal smp_send_stop(), if we fail to stop one or more CPUs
  then we won't include the current CPU (the one running
  smp_send_stop()) in the error message.
* In crash_smp_send_stop(), if we fail to stop some CPUs we'll print
  the CPUs that we failed to stop instead of printing all _but_ the
  current running CPU.
* In crash_smp_send_stop(), we will now only print "SMP: stopping
  secondary CPUs" if (system_state <= SYSTEM_RUNNING).

Fixes: d7402513c935 ("arm64: smp: IPI_CPU_STOP and IPI_CPU_CRASH_STOP should try for NMI")
Signed-off-by: Douglas Anderson <>
---
I'm not set up to test crash_smp_send_stop(). I made sure it
compiled and hacked the panic() function to call it, but I haven't
actually run kexec. Hopefully others can confirm that it's working for
them.

 arch/arm64/kernel/smp.c | 115 +++++++++++++++++++---------------------
 1 file changed, 54 insertions(+), 61 deletions(-)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index defbab84e9e5..9fe9d4342517 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -71,7 +71,7 @@ enum ipi_msg_type {
 	IPI_RESCHEDULE,
 	IPI_CALL_FUNC,
 	IPI_CPU_STOP,
-	IPI_CPU_CRASH_STOP,
+	IPI_CPU_STOP_NMI,
 	IPI_TIMER,
 	IPI_IRQ_WORK,
 	NR_IPI,
@@ -88,6 +88,9 @@ static int ipi_irq_base __ro_after_init;
 static int nr_ipi __ro_after_init = NR_IPI;
 static struct irq_desc *ipi_desc[MAX_IPI] __ro_after_init;
 
+static DECLARE_BITMAP(stop_mask, NR_CPUS) __read_mostly;
+static bool crash_stop;
+
 static void ipi_setup(int cpu);
@@ -770,7 +773,7 @@ static const char *ipi_types[NR_IPI] __tracepoint_string = {
 	[IPI_RESCHEDULE]	= "Rescheduling interrupts",
 	[IPI_CALL_FUNC]		= "Function call interrupts",
 	[IPI_CPU_STOP]		= "CPU stop interrupts",
-	[IPI_CPU_CRASH_STOP]	= "CPU stop (for crash dump) interrupts",
+	[IPI_CPU_STOP_NMI]	= "CPU stop NMIs",
 	[IPI_TIMER]		= "Timer broadcast interrupts",
 	[IPI_IRQ_WORK]		= "IRQ work interrupts",
 };
@@ -831,17 +834,11 @@ void __noreturn panic_smp_self_stop(void)
 	local_cpu_stop();
 }
 
-#ifdef CONFIG_KEXEC_CORE
-static atomic_t waiting_for_crash_ipi = ATOMIC_INIT(0);
-#endif
-
 static void __noreturn ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs)
 {
 #ifdef CONFIG_KEXEC_CORE
 	crash_save_cpu(regs, cpu);
 
-	atomic_dec(&waiting_for_crash_ipi);
-
 	local_irq_disable();
 	sdei_mask_local_cpu();
@@ -907,14 +904,13 @@ static void do_handle_IPI(int ipinr)
 	case IPI_CPU_STOP:
-		local_cpu_stop();
-		break;
-
-	case IPI_CPU_CRASH_STOP:
-		if (IS_ENABLED(CONFIG_KEXEC_CORE)) {
+	case IPI_CPU_STOP_NMI:
+		cpumask_clear_cpu(cpu, to_cpumask(stop_mask));
+		if (IS_ENABLED(CONFIG_KEXEC_CORE) && crash_stop) {
 			ipi_cpu_crash_stop(cpu, get_irq_regs());
-
 			unreachable();
-		}
+		} else {
+			local_cpu_stop();
+		}
 		break;
@@ -969,8 +965,7 @@ static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
 		return false;
 
 	switch (ipi) {
-	case IPI_CPU_STOP:
-	case IPI_CPU_CRASH_STOP:
+	case IPI_CPU_STOP_NMI:
 	case IPI_CPU_BACKTRACE:
 	case IPI_KGDB_ROUNDUP:
 		return true;
@@ -1085,77 +1080,75 @@ void smp_send_stop(void)
 	unsigned long timeout;
-	if (num_other_online_cpus()) {
-		cpumask_t mask;
+	/*
+	 * If this cpu is the only one alive at this point in time, online or
+	 * not, there are no stop messages to be sent around, so just back out.
+	 */
+	if (num_other_online_cpus() == 0)
+		goto skip_ipi;
-		cpumask_copy(&mask, cpu_online_mask);
-		cpumask_clear_cpu(smp_processor_id(), &mask);
+	cpumask_copy(to_cpumask(stop_mask), cpu_online_mask);
+	cpumask_clear_cpu(smp_processor_id(), to_cpumask(stop_mask));
-		if (system_state <= SYSTEM_RUNNING)
-			pr_crit("SMP: stopping secondary CPUs\n");
-		smp_cross_call(&mask, IPI_CPU_STOP);
-	}
+	if (system_state <= SYSTEM_RUNNING)
+		pr_crit("SMP: stopping secondary CPUs\n");
-	/* Wait up to one second for other CPUs to stop */
+	/*
+	 * Start with a normal IPI and wait up to one second for other CPUs to
+	 * stop. We do this first because it gives other processors a chance
+	 * to exit critical sections / drop locks and makes the rest of the
+	 * stop process (especially console flush) more robust.
+	 */
+	smp_cross_call(to_cpumask(stop_mask), IPI_CPU_STOP);
 	timeout = USEC_PER_SEC;
-	while (num_other_online_cpus() && timeout--)
+	while (!cpumask_empty(to_cpumask(stop_mask)) && timeout--)
 		udelay(1);
-	if (num_other_online_cpus())
+	/*
+	 * If CPUs are still online, try an NMI. There's no excuse for this to
+	 * be slow, so we only give them an extra 10 ms to respond.
+	 */
+	if (!cpumask_empty(to_cpumask(stop_mask)) &&
+	    ipi_should_be_nmi(IPI_CPU_STOP_NMI)) {
+		pr_info("SMP: retry stop with NMI for CPUs %*pbl\n",
+			cpumask_pr_args(to_cpumask(stop_mask)));
+		smp_cross_call(to_cpumask(stop_mask), IPI_CPU_STOP_NMI);
+		timeout = USEC_PER_MSEC * 10;
+		while (!cpumask_empty(to_cpumask(stop_mask)) && timeout--)
+			udelay(1);
+	}
+	if (!cpumask_empty(to_cpumask(stop_mask)))
 		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
-			cpumask_pr_args(cpu_online_mask));
+			cpumask_pr_args(to_cpumask(stop_mask)));
 
+skip_ipi:
 	sdei_mask_local_cpu();
 }

 void crash_smp_send_stop(void)
 {
-	static int cpus_stopped;
-	cpumask_t mask;
-	unsigned long timeout;
 	/*
 	 * This function can be called twice in panic path, but obviously
 	 * we execute this only once.
+	 *
+	 * We use this same boolean to tell whether the IPI we send was a
+	 * stop or a "crash stop".
 	 */
-	if (cpus_stopped)
+	if (crash_stop)
 		return;
-
-	cpus_stopped = 1;
+	crash_stop = 1;
-	/*
-	 * If this cpu is the only one alive at this point in time, online or
-	 * not, there are no stop messages to be sent around, so just back out.
-	 */
-	if (num_other_online_cpus() == 0)
-		goto skip_ipi;
-	cpumask_copy(&mask, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), &mask);
-	atomic_set(&waiting_for_crash_ipi, num_other_online_cpus());
+	smp_send_stop();
-	pr_crit("SMP: stopping secondary CPUs\n");
-	smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
-	/* Wait up to one second for other CPUs to stop */
-	timeout = USEC_PER_SEC;
-	while ((atomic_read(&waiting_for_crash_ipi) > 0) && timeout--)
-		udelay(1);
-	if (atomic_read(&waiting_for_crash_ipi) > 0)
-		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
-			cpumask_pr_args(&mask));
-	sdei_mask_local_cpu();
 bool smp_crash_stop_failed(void)
 {
-	return (atomic_read(&waiting_for_crash_ipi) > 0);
+	return !cpumask_empty(to_cpumask(stop_mask));
 }
