Message-ID: <aKWh2R0ZVZ7nnLiw@pathway.suse.cz>
Date: Wed, 20 Aug 2025 12:22:17 +0200
From: Petr Mladek <pmladek@...e.com>
To: Jinchao Wang <wangjinchao600@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...nel.org>
Cc: John Ogness <john.ogness@...utronix.de>,
Joel Granados <joel.granados@...nel.org>,
Dave Jiang <dave.jiang@...el.com>,
Sravan Kumar Gundu <sravankumarlpu@...il.com>,
Ryo Takakura <takakura@...inux.co.jp>, linux-kernel@...r.kernel.org,
Wei Liu <wei.liu@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>
Subject: Re: [PATCH] panic: call hardlockup_detector_perf_stop in panic
Adding Peter Zijlstra into Cc.
The nested panic() should return in that case. But panic() was never supposed to
return. It does not seem to be marked as noreturn, but I am not sure
whether some tricks are hidden somewhere, for example in objtool, or...
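
Just to illustrate the concern, here is a minimal userspace sketch of what a
noreturn annotation means for callers and tooling. It is only an illustration,
not a statement about how panic()/vpanic() are annotated in the current tree:

#include <stdio.h>
#include <stdlib.h>

/* The compiler (and tools like objtool in a kernel build) may assume
 * that control never comes back from a noreturn function. */
__attribute__((__noreturn__))
static void die(const char *msg)
{
        fprintf(stderr, "%s\n", msg);
        exit(1);
        /* Anything after exit() is unreachable; an early "return" from
         * such a function would break the assumption above. */
}

int main(void)
{
        die("fatal");
}
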
On Wed 2025-08-20 14:22:52, Jinchao Wang wrote:
> On 8/19/25 23:01, Petr Mladek wrote:
> > On Wed 2025-07-30 11:06:33, Wang Jinchao wrote:
> > > When a panic happens, it blocks the cpu, which may
> > > trigger the hardlockup detector if some dump is slow.
> > > So call hardlockup_detector_perf_stop() to disable
> > > the hardlockup detector.
> >
> > Could you please provide more details, especially the log showing
> > the problem?
>
> Here's what happened: I configured the kernel to use efi-pstore for kdump
> logging while enabling the perf hard lockup detector (NMI). Perhaps the
> efi-pstore was slow and there were too many logs. When the first panic was
> triggered, the pstore dump callback in kmsg_dump()->dumper->dump() took a
> long time, which triggered the NMI watchdog. Then emergency_restart()
> triggered the machine restart before the efi-pstore operation finished.
> The function call flow looked like this:
>
> ```c
> real panic() {
>     kmsg_dump() {
>         ...
>         pstore_dump() {
>             start_dump();
>             ... // long time operation triggers NMI watchdog
>             nmi panic() {
>                 ...
>                 emergency_restart(); // pstore unfinished
>             }
>             ...
>             finish_dump(); // never reached
>         }
>     }
> }
> ```
>
> This created a nested panic situation where the second panic interrupted
> the crash dump process, causing the loss of the original panic information.
I believe that we should prevent the nested panic() in the first
place. There already is the following code:
void vpanic(const char *fmt, va_list args)
{
[...]
        /*
         * Only one CPU is allowed to execute the panic code from here. For
         * multiple parallel invocations of panic, all other CPUs either
         * stop themself or will wait until they are stopped by the 1st CPU
         * with smp_send_stop().
         *
         * cmpxchg success means this is the 1st CPU which comes here,
         * so go ahead.
         * `old_cpu == this_cpu' means we came from nmi_panic() which sets
         * panic_cpu to this CPU. In this case, this is also the 1st CPU.
         */
        old_cpu = PANIC_CPU_INVALID;
        this_cpu = raw_smp_processor_id();

        /* atomic_try_cmpxchg updates old_cpu on failure */
        if (atomic_try_cmpxchg(&panic_cpu, &old_cpu, this_cpu)) {
                /* go ahead */
        } else if (old_cpu != this_cpu)
                panic_smp_self_stop();
We should improve it to detect a nested panic() call as well,
something like:
        this_cpu = raw_smp_processor_id();

        /* Bail out in a nested panic(). Let the outer one finish the job. */
        if (this_cpu == atomic_read(&panic_cpu))
                return;

        /* atomic_try_cmpxchg updates old_cpu on failure */
        old_cpu = PANIC_CPU_INVALID;
        if (atomic_try_cmpxchg(&panic_cpu, &old_cpu, this_cpu)) {
                /* go ahead */
        } else if (old_cpu != this_cpu)
                panic_smp_self_stop();
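
Not a kernel patch, but a self-contained userspace sketch of this
'first caller wins, nested call bails out' gate, using C11 atomics in
place of atomic_try_cmpxchg(); the names only mirror the kernel ones:

#include <stdatomic.h>
#include <stdio.h>

#define PANIC_CPU_INVALID -1

static atomic_int panic_cpu = PANIC_CPU_INVALID;

static void panic_sketch(int this_cpu)
{
        int old_cpu = PANIC_CPU_INVALID;

        /* Nested call: the outer invocation on this CPU already owns
         * panic_cpu, so let it finish the job. */
        if (atomic_load(&panic_cpu) == this_cpu) {
                printf("cpu%d: nested call, returning\n", this_cpu);
                return;
        }

        /* Like atomic_try_cmpxchg(), this updates old_cpu on failure. */
        if (atomic_compare_exchange_strong(&panic_cpu, &old_cpu, this_cpu)) {
                printf("cpu%d: first caller, doing the panic work\n", this_cpu);
                panic_sketch(this_cpu);         /* now returns early */
        } else if (old_cpu != this_cpu) {
                printf("cpu%d: cpu%d is already panicking, stopping here\n",
                       this_cpu, old_cpu);
        }
}

int main(void)
{
        panic_sketch(0);        /* first panic() */
        panic_sketch(1);        /* later panic() on another CPU */
        return 0;
}
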
> > That said, it might make sense to disable the hardlockup
> > detector during panic. But I do not like the proposed way,
> > see below.
> >
> > > --- a/kernel/panic.c
> > > +++ b/kernel/panic.c
> > > @@ -339,6 +339,7 @@ void panic(const char *fmt, ...)
> > > */
> > > local_irq_disable();
> > > preempt_disable_notrace();
> > > + hardlockup_detector_perf_stop();
> >
> > Anyway, it does not look safe. panic() might be called in any context,
> > including NMI, and I see:
> >
> >   + hardlockup_detector_perf_stop()
> >     + perf_event_disable()
> >       + perf_event_ctx_lock()
> >         + mutex_lock_nested()
> >
> > This might cause a deadlock when called in NMI, definitely.
> >
> > Alternative:
> >
> > A conservative approach would be to update watchdog_hardlockup_check()
> > so that it does nothing when panic_in_progress() returns true. It
> > would even work for both hardlockup detector implementations.
> Yes, I think it is a better solution.
> I didn't find panic_in_progress(), but found hardlockup_detector_perf_stop()
> available instead :)
> I will send another patch.
OK.
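
FWIW, panic_in_progress() is just a small helper around panic_cpu; in
recent trees it appears to live in include/linux/panic.h, roughly:

        static inline bool panic_in_progress(void)
        {
                return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
        }

And the conservative variant could look something like the sketch below
(untested, assuming the current watchdog_hardlockup_check() signature in
kernel/watchdog.c):

        void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
        {
                /*
                 * The panicking CPU may legitimately spend a long time
                 * in kmsg_dump(), e.g. with a slow pstore backend, so
                 * do not report hardlockups once a panic is in progress.
                 */
                if (panic_in_progress())
                        return;

                /* ... existing hardlockup detection ... */
        }
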
Best Regards,
Petr