Message-ID: <20230217120604.435608-1-zengheng4@huawei.com>
Date:   Fri, 17 Feb 2023 20:06:04 +0800
From:   Zeng Heng <zengheng4@...wei.com>
To:     <alexander.shishkin@...ux.intel.com>, <tglx@...utronix.de>,
        <peterz@...radead.org>, <tiwai@...e.de>, <jolsa@...nel.org>,
        <vbabka@...e.cz>, <keescook@...omium.org>, <mingo@...hat.com>,
        <acme@...nel.org>, <namhyung@...nel.org>, <bp@...en8.de>,
        <bhe@...hat.com>, <eric.devolder@...cle.com>, <hpa@...or.com>,
        <jroedel@...e.de>, <dave.hansen@...ux.intel.com>
CC:     <linux-perf-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <liwei391@...wei.com>, <x86@...nel.org>, <xiexiuqi@...wei.com>
Subject: [RFC PATCH v4] x86/kdump: terminate watchdog NMI interrupt to avoid kdump crashes

If the CPU panics within NMI interrupt context, there may be pending NMI
interrupts in the background which the processor keeps blocked until the
next IRET instruction executes; the processor blocks them to prevent
nested NMI handler execution.

If an IRET executes during the kdump reboot while no proper NMI handler
is registered at that point (for example, inside the EFI loader), the
pending watchdog NMI will crash kdump. We therefore need to ensure the
watchdog no longer raises NMIs, so call perf_event_exit_cpu() at the
very last moment of the panic shutdown.

!! I know perf_event_exit_cpu() must not be called from NMI context,
because of mutex_lock(), smp_call_function() and so on.
Does any expert know of a similar function that is allowed to be called
from atomic context? (Neither x86_pmu_disable() nor x86_pmu_disable_all()
worked in my testing.)

Thank you in advance.

Here is a test case to reproduce the issue:
  1. # cat uncorrected
     CPU 1 BANK 4
     STATUS uncorrected 0xc0
     MCGSTATUS  EIPV MCIP
     ADDR 0x1234
     RIP 0xdeadbabe
     RAISINGCPU 0
     MCGCAP SER CMCI TES 0x6
  2. # modprobe mce_inject
  3. # mce-inject uncorrected

mce-inject triggers a kernel panic in NMI interrupt context. In
addition, another NMI (such as one from the watchdog) must be raised
during the panic process. Set a suitable watchdog threshold and/or add
an artificial delay to make sure a watchdog interrupt is raised during
the panic procedure, so the issue described above occurs.
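As a sketch of the "set proper watchdog threshold" step above (assuming the hardlockup detector is built in, i.e. CONFIG_HARDLOCKUP_DETECTOR=y; the exact sysctl paths depend on kernel configuration), one way to make the watchdog NMI fire as early as possible during the test is:

```shell
# Confirm the NMI (hardlockup) watchdog is enabled; prints 1 when active.
cat /proc/sys/kernel/nmi_watchdog

# Lower the hardlockup threshold to its minimum value (in seconds) so
# the watchdog perf NMI is raised as often as possible while the panic
# path runs. Requires root.
echo 1 > /proc/sys/kernel/watchdog_thresh
```

An artificial delay (e.g. an mdelay() inserted in the panic path of a test kernel) can then be used to guarantee the watchdog NMI arrives while the panic is in progress.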

Fixes: ca0e22d4f011 ("x86/boot/compressed/64: Always switch to own page table")
Signed-off-by: Zeng Heng <zengheng4@...wei.com>
---
  v1: add dummy NMI interrupt handler in EFI loader
  v2: tidy up changelog, add comments (by Ingo Molnar)
  v3: add iret_to_self() to deal with blocked NMIs in advance
  v4: call perf_event_exit_cpu() to terminate watchdog in panic shutdown

 arch/x86/kernel/crash.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 305514431f26..f46df94bbdad 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -25,6 +25,7 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 #include <linux/memblock.h>
+#include <linux/perf_event.h>

 #include <asm/processor.h>
 #include <asm/hardirq.h>
@@ -170,6 +171,15 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
 #ifdef CONFIG_HPET_TIMER
 	hpet_disable();
 #endif
+
+	/*
+	 * If the CPU panics within NMI interrupt context, we
+	 * need to make sure no further NMIs remain blocked by
+	 * the processor. If an IRET executes on the kdump path
+	 * while no proper NMI handler is registered, kdump will
+	 * crash, so terminate the watchdog in panic shutdown.
+	 */
+	perf_event_exit_cpu(smp_processor_id());
 	crash_save_cpu(regs, safe_smp_processor_id());
 }

--
2.25.1
