Message-Id: <20200326222836.501404-1-leonardo@linux.ibm.com>
Date: Thu, 26 Mar 2020 19:28:37 -0300
From: Leonardo Bras <leonardo@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Enrico Weigelt <info@...ux.net>,
Leonardo Bras <leonardo@...ux.ibm.com>,
Allison Randal <allison@...utok.net>,
Christophe Leroy <christophe.leroy@....fr>,
Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH 1/1] ppc/crash: Skip spinlocks during crash
During a crash, there is a chance that the CPUs handling the NMI IPI
are holding a spin_lock. If this spin_lock is needed by the crashing
CPU, it will cause a deadlock (rtas_lock and printk's logbuf_lock as of
today).

This is a problem if the system has kdump set up: if it crashes for
any reason, the kdump image may not be saved for crash analysis.

Skip spinlocks after the NMI IPI is sent to all other CPUs.
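The idea can be illustrated with a minimal userspace sketch (not kernel
code; the names toy_spinlock and toy_spin_lock are hypothetical, and the
flag mirrors the patch's crash_skip_spinlock): the acquire loop gives up
once the crash flag is set, so a crashing CPU never spins forever on a
lock held by a CPU that was stopped by the NMI IPI.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch only: mirrors the patch's crash_skip_spinlock flag. */
static atomic_bool crash_skip_spinlock = false;

/* Hypothetical stand-in for arch_spinlock_t. */
struct toy_spinlock {
	atomic_flag held;
};

static void toy_spin_lock(struct toy_spinlock *lock)
{
	while (atomic_flag_test_and_set_explicit(&lock->held,
						 memory_order_acquire)) {
		/*
		 * During a crash the owner may never release the lock
		 * (its CPU is parked in the NMI handler), so bail out
		 * instead of spinning forever.
		 */
		if (atomic_load(&crash_skip_spinlock))
			return;
	}
}
```

With the flag clear this behaves like an ordinary test-and-set spinlock;
once the flag is set, a contended acquire returns immediately, trading
lock correctness (which no longer matters mid-crash) for forward
progress toward saving the kdump image.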
Signed-off-by: Leonardo Bras <leonardo@...ux.ibm.com>
---
 arch/powerpc/include/asm/spinlock.h | 6 ++++++
 arch/powerpc/kexec/crash.c          | 3 +++
 2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 860228e917dc..a6381d110795 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -111,6 +111,8 @@ static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
#endif
+extern bool crash_skip_spinlock __read_mostly;
+
static inline bool is_shared_processor(void)
{
#ifdef CONFIG_PPC_SPLPAR
@@ -142,6 +144,8 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
if (likely(__arch_spin_trylock(lock) == 0))
break;
do {
+ if (unlikely(crash_skip_spinlock))
+ return;
HMT_low();
if (is_shared_processor())
splpar_spin_yield(lock);
@@ -161,6 +165,8 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
local_save_flags(flags_dis);
local_irq_restore(flags);
do {
+ if (unlikely(crash_skip_spinlock))
+ return;
HMT_low();
if (is_shared_processor())
splpar_spin_yield(lock);
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index d488311efab1..8a522380027d 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -66,6 +66,8 @@ static int handle_fault(struct pt_regs *regs)
#ifdef CONFIG_SMP
+bool crash_skip_spinlock;
+
static atomic_t cpus_in_crash;
void crash_ipi_callback(struct pt_regs *regs)
{
@@ -129,6 +131,7 @@ static void crash_kexec_prepare_cpus(int cpu)
/* Would it be better to replace the trap vector here? */
if (atomic_read(&cpus_in_crash) >= ncpus) {
+ crash_skip_spinlock = true;
printk(KERN_EMERG "IPI complete\n");
return;
}
--
2.24.1