Message-ID: <970366ee-0fc0-4a64-816e-3c3ac738e24a@gmail.com>
Date: Wed, 20 Aug 2025 14:22:52 +0800
From: Jinchao Wang <wangjinchao600@...il.com>
To: Petr Mladek <pmladek@...e.com>
Cc: John Ogness <john.ogness@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Joel Granados <joel.granados@...nel.org>, Dave Jiang <dave.jiang@...el.com>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Sravan Kumar Gundu <sravankumarlpu@...il.com>,
Ryo Takakura <takakura@...inux.co.jp>, linux-kernel@...r.kernel.org,
Wei Liu <wei.liu@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>
Subject: Re: [PATCH] panic: call hardlockup_detector_perf_stop in panic
On 8/19/25 23:01, Petr Mladek wrote:
> On Wed 2025-07-30 11:06:33, Wang Jinchao wrote:
>> When a panic happens, it blocks the CPU, which may
>> trigger the hardlockup detector if a dump is slow.
>> So call hardlockup_detector_perf_stop() to disable
>> the hardlockup detector.
>
> Could you please provide more details, especially the log showing
> the problem?
Here is what happened: I configured the kernel to use efi-pstore for kdump
logging and enabled the perf hardlockup detector (NMI watchdog). The
efi-pstore backend was probably slow and there was a lot of log data. When
the first panic was triggered, the pstore dump callback in
kmsg_dump()->dumper->dump() took long enough to trip the NMI watchdog; the
resulting nested panic then called emergency_restart() before the
efi-pstore write had finished.
The call flow looked like this:
```c
panic() {                        // the original ("real") panic
    kmsg_dump() {
        ...
        pstore_dump() {
            start_dump();
            ...                  // slow operation trips the NMI watchdog
            nmi_panic() {        // nested panic from the hardlockup detector
                ...
                emergency_restart();  // pstore dump still unfinished
            }
            ...
            finish_dump();       // never reached
        }
    }
}
```
This created a nested panic situation where the second panic interrupted
the crash dump process, causing the loss of the original panic information.
>
> I wonder if this is similar to
> https://lore.kernel.org/all/SN6PR02MB4157A4C5E8CB219A75263A17D46DA@SN6PR02MB4157.namprd02.prod.outlook.com/
>
> There was a problem that a non-panic CPU might get stuck in
> pl011_console_write_thread() or any other con->write_thread()
> callback because nbcon_reacquire_nobuf(wctxt) ended in an infinite
> loop.
>
> It was a real lockup. It was recently fixed in 6.17-rc1 by
> commit 571c1ea91a73db56bd94 ("printk: nbcon: Allow reacquire
> during panic"), see
> https://patch.msgid.link/20250606185549.900611-1-john.ogness@linutronix.de
> It is possible that it fixed your problem as well.
>
> That said, it might make sense to disable the hardlockup
> detector during panic. But I do not like the proposed way,
> see below.
>
>> --- a/kernel/panic.c
>> +++ b/kernel/panic.c
>> @@ -339,6 +339,7 @@ void panic(const char *fmt, ...)
>> */
>> local_irq_disable();
>> preempt_disable_notrace();
>> + hardlockup_detector_perf_stop();
>
> I see the following in kernel/watchdog_perf.c:
>
> /**
> * hardlockup_detector_perf_stop - Globally stop watchdog events
> *
> * Special interface for x86 to handle the perf HT bug.
> */
> void __init hardlockup_detector_perf_stop(void)
> {
> [...]
> lockdep_assert_cpus_held();
> [...]
> }
>
> 1. It is suspicious to see an x86-specific "hacky" function called in
> the generic panic().
>
> Is this safe?
> What about other hardlockup detectors?
>
>
> 2. I expect that lockdep_assert_cpus_held() would complain
> when CONFIG_LOCKDEP was enabled.
>
>
> Anyway, it does not look safe. panic() might be called in any context,
> including NMI, and I see:
>
> + hardlockup_detector_perf_stop()
> + perf_event_disable()
> + perf_event_ctx_lock()
> + mutex_lock_nested()
>
> This can definitely deadlock when called from NMI context.
>
> Alternative:
>
> A conservative approach would be to update watchdog_hardlockup_check()
> so that it does nothing when panic_in_progress() returns true. It
> would even work for both hardlockup detector implementations.
Yes, I think that is a better solution.
I didn't find panic_in_progress(), so I used
hardlockup_detector_perf_stop(), which was available instead :)
I will send another patch.
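
Roughly what I have in mind (an untested sketch, not the actual patch;
it assumes panic_in_progress(), i.e. the panic_cpu != PANIC_CPU_INVALID
check, can be made visible to kernel/watchdog.c):

```c
/*
 * Untested sketch only. Assumes panic_in_progress() (the helper that
 * compares panic_cpu with PANIC_CPU_INVALID) is declared somewhere
 * reachable from kernel/watchdog.c, e.g. <linux/panic.h>.
 */
void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
{
	/*
	 * A panic is already in progress; the "stuck" CPU is most likely
	 * busy dumping (e.g. in the pstore kmsg_dump callback), so do not
	 * report a hardlockup or trigger a nested panic.
	 */
	if (panic_in_progress())
		return;

	/* ... existing hardlockup detection logic ... */
}
```

That bail-out would cover both the perf and the buddy detector, since
both of them report through watchdog_hardlockup_check().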
>
> Best Regards,
> Petr
--
Best regards,
Jinchao