Message-ID: <87d08ywj61.fsf@mpe.ellerman.id.au>
Date: Fri, 27 Mar 2020 14:50:14 +1100
From: Michael Ellerman <mpe@...erman.id.au>
To: Leonardo Bras <leonardo@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Alexios Zavras <alexios.zavras@...el.com>,
Christophe Leroy <christophe.leroy@....fr>,
Leonardo Bras <leonardo@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2 1/1] ppc/crash: Skip spinlocks during crash
Hi Leonardo,
Leonardo Bras <leonardo@...ux.ibm.com> writes:
> During a crash, there is a chance that the cpus that handle the NMI IPI
> are holding a spin_lock. If this spin_lock is needed by crashing_cpu it
> will cause a deadlock. (rtas_lock and printk's logbuf_lock as of today)
Please give us more detail on how those locks are causing you trouble, a
stack trace would be good if you have it.
> This is a problem if the system has kdump set up: if it crashes for any
> reason, the crash dump may not be saved for later analysis.
>
> Skip spinlocks after NMI IPI is sent to all other cpus.
We don't want to add overhead to all spinlocks for the life of the
system, just to handle this one case.
There's already a flag that is set when the system is crashing,
"oops_in_progress", maybe we need to use that somewhere to skip a lock
or do an early return.
cheers
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 860228e917dc..a6381d110795 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -111,6 +111,8 @@ static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
> static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
> #endif
>
> +extern bool crash_skip_spinlock __read_mostly;
> +
> static inline bool is_shared_processor(void)
> {
> #ifdef CONFIG_PPC_SPLPAR
> @@ -142,6 +144,8 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
> if (likely(__arch_spin_trylock(lock) == 0))
> break;
> do {
> + if (unlikely(crash_skip_spinlock))
> + return;
> HMT_low();
> if (is_shared_processor())
> splpar_spin_yield(lock);
> @@ -161,6 +165,8 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> local_save_flags(flags_dis);
> local_irq_restore(flags);
> do {
> + if (unlikely(crash_skip_spinlock))
> + return;
> HMT_low();
> if (is_shared_processor())
> splpar_spin_yield(lock);
> diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
> index d488311efab1..ae081f0f2472 100644
> --- a/arch/powerpc/kexec/crash.c
> +++ b/arch/powerpc/kexec/crash.c
> @@ -66,6 +66,9 @@ static int handle_fault(struct pt_regs *regs)
>
> #ifdef CONFIG_SMP
>
> +bool crash_skip_spinlock;
> +EXPORT_SYMBOL(crash_skip_spinlock);
> +
> static atomic_t cpus_in_crash;
> void crash_ipi_callback(struct pt_regs *regs)
> {
> @@ -129,6 +132,7 @@ static void crash_kexec_prepare_cpus(int cpu)
> /* Would it be better to replace the trap vector here? */
>
> if (atomic_read(&cpus_in_crash) >= ncpus) {
> + crash_skip_spinlock = true;
> printk(KERN_EMERG "IPI complete\n");
> return;
> }
> --
> 2.24.1