Message-ID: <ZWTORl4jMi9ODns1@alley>
Date: Mon, 27 Nov 2023 18:13:42 +0100
From: Petr Mladek <pmladek@...e.com>
To: lizhe.67@...edance.com
Cc: dianders@...omium.org, akpm@...ux-foundation.org,
kernelfans@...il.com, lecopzer.chen@...iatek.com,
linux-kernel@...r.kernel.org, lizefan.x@...edance.com,
John Ogness <john.ogness@...utronix.de>
Subject: Re: [RFC] softlockup: serialized softlockup's log
On Wed 2023-11-22 11:53:04, lizhe.67@...edance.com wrote:
> On Fri, 17 Nov 2023 13:45:21 <dianders@...omium.org> wrote:
> >>
> >> From: Li Zhe <lizhe.67@...edance.com>
> >>
> >> If multiple CPUs trigger softlockup at the same time, the softlockup
> >> logs appear interleaved in dmesg, which makes them hard for developers
> >> to read. Since the code path that prints the softlockup logs is not a
> >> kernel hotspot and has no strict performance requirements, use a lock
> >> to serialize the softlockup log output and improve the readability of
> >> the logs.
> >>
> >> Signed-off-by: Li Zhe <lizhe.67@...edance.com>
> >> ---
> >> kernel/watchdog.c | 3 +++
> >> 1 file changed, 3 insertions(+)
> >
> >This seems reasonable to me. It might be interesting to talk about in
> >your commit message how this interacts with the various options. From
> >code inspection, I believe:
>
> Thanks for your advice. I will send a v2 patch with an improved commit
> message.
>
> >* If `softlockup_all_cpu_backtrace` is set, then this is a no-op, since
> >other CPUs will be prevented from running the printing code while one
> >is already printing.
Yup.
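
For reference, the "only one CPU dumps all backtraces" gating in
watchdog_timer_fn() works roughly like this (illustrative sketch
paraphrased from kernel/watchdog.c, not the exact code):

    /* file-scope flag in kernel/watchdog.c */
    static unsigned long soft_lockup_nmi_warn;

    /* in watchdog_timer_fn(), before printing the report */
    if (softlockup_all_cpu_backtrace) {
        /*
         * Only one CPU at a time is allowed to dump all backtraces.
         * Everyone else bails out and relies on that CPU's report.
         */
        if (test_and_set_bit_lock(0, &soft_lockup_nmi_warn))
            return HRTIMER_RESTART;
    }

    /* ... print this CPU's softlockup report ... */

    if (softlockup_all_cpu_backtrace) {
        trigger_allbutcpu_cpu_backtrace(smp_processor_id());
        clear_bit_unlock(0, &soft_lockup_nmi_warn);
    }
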
> Yes, you are right. If `softlockup_all_cpu_backtrace` is set, the
> interleaving problem is gone. And we don't need to worry about
> interleaving inside trigger_allbutcpu_cpu_backtrace() because it
> already serializes the logs.
>
> >* I'm not 100% sure what happens if `softlockup_panic` is set and I
> >haven't sat down to test this myself. Will one CPU's panic message
> >interleave with the other CPU's traces? I guess in the end both CPUs
> >will call panic()? Maybe you could experiment and describe the
> >behavior in your commit message?
>
> I did experiments and checked the implementation of the panic function.
> I have not reproduced the interleaving problem with this patch. panic()
> internally serializes its logs by using the variable 'panic_cpu'.
> Besides, panic() stops the other CPUs before outputting logs, so I think
> the interleaving problem between the softlockup logs from CPU A and the
> panic logs from softlockup CPU B does not exist.

panic() actually prints a lot of information before stopping the other
CPUs. They are stopped either inside __crash_kexec(), when it is used,
or right afterwards when __crash_kexec() is not called.

Well, the spinlock in watchdog_timer_fn() should be safe from
panic() POV because:
1. panic() disables interrupts before trying to become
"panic_cpu". As a result, watchdog_timer_fn() should never
trigger on the CPU handling panic().
2. panic() does not wait for other CPUs to finish any work.
It must assume that the system is broken and other CPUs
might be blocked.
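
For completeness, the "panic_cpu" serialization mentioned above looks
roughly like this (simplified sketch of kernel/panic.c, from memory):

    atomic_t panic_cpu = ATOMIC_INIT(PANIC_CPU_INVALID);

    void panic(const char *fmt, ...)
    {
        int old_cpu, this_cpu;

        local_irq_disable();    /* point 1. above */

        /* Only one CPU is allowed to execute the panic code. */
        this_cpu = raw_smp_processor_id();
        old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, this_cpu);
        if (old_cpu != PANIC_CPU_INVALID && old_cpu != this_cpu)
            panic_smp_self_stop();    /* another CPU won; stop here */

        /* ... print panic messages, stop the other CPUs, ... */
    }
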
> >> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> >> index 5cd6d4e26915..8324ac194d0a 100644
> >> --- a/kernel/watchdog.c
> >> +++ b/kernel/watchdog.c
> >> @@ -448,6 +448,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
> >> struct pt_regs *regs = get_irq_regs();
> >> int duration;
> >> int softlockup_all_cpu_backtrace = sysctl_softlockup_all_cpu_backtrace;
> >> + static DEFINE_SPINLOCK(watchdog_timer_lock);
> >
> >I'd be tempted to define this outside the scope of this function. I
> >need to dig more, but I'm pretty sure I've seen cases where a soft
> >lockup could trigger while I was trying to print traces for a
> >hardlockup, so it might be useful to grab the same spinlock in both
> >places...
>
> I've tried several times, but unfortunately I haven't been able to
> reproduce the problem you mentioned. My concern is that if the lock
> is shared, there will be potential deadlock issues because the
> hardlockup detector runs in NMI context.

Yes, one has to be very careful when using locks under NMI.
I am aware of three approaches:
1. It should be safe to use a lock which is used only in NMI.
The only problem might be ABBA deadlocks.
But this is not usable to serialize softlockup and hardlockup.
2. Using try_lock() in NMI and skipping the report when the lock
could not be taken (a minimal sketch follows after this list).
But we probably should not skip the detailed info printed by the
hardlockup detector just because another softlockup or
hardlockup report is running in parallel.
3. Using a lock which might be nested on the same CPU.
It does not prevent mixed output completely. The nested
context will print the messages in the middle of the outer
one on the same CPU. But it would serialize output against
other CPUs.
There already is one used to serialize some backtraces,
see printk_cpu_sync_get_irqsave() in dump_stack_lvl() and
nmi_cpu_backtrace().
It should be safe as long as the locked code does not wait for
other CPUs and it is a leaf lock, so that there is no need to
think about ABBA deadlocks against other locks.
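
Just to make approach 2. concrete, a minimal sketch with a
hypothetical shared lock (not something I am proposing as-is):

    /* hypothetical lock shared by softlockup/hardlockup reports */
    static DEFINE_RAW_SPINLOCK(watchdog_report_lock);

    /* in the hardlockup detector, i.e. NMI context */
    if (!raw_spin_trylock(&watchdog_report_lock))
        return;    /* another report is in progress; skip this one */

    /* ... print the hardlockup report ... */

    raw_spin_unlock(&watchdog_report_lock);
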
IMHO, printk_cpu_sync_get_irqsave() should probably be safe
for serializing the softlockup and hardlockup detectors.
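
A minimal sketch of what that could look like around the existing
softlockup report in watchdog_timer_fn() (paraphrased, not a tested
patch):

    unsigned long flags;

    /* serializes against other CPUs; may be nested on this CPU */
    printk_cpu_sync_get_irqsave(flags);

    pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
             smp_processor_id(), duration,
             current->comm, task_pid_nr(current));
    print_modules();
    print_irqtrace_events(current);
    if (regs)
        show_regs(regs);
    else
        dump_stack();

    printk_cpu_sync_put_irqrestore(flags);

Note that dump_stack() takes the same sync internally. That is fine
because it may be nested on the same CPU, see approach 3. above.
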
Adding John to Cc. He is familiar with
printk_cpu_sync_get_irqsave(). He also wanted to use it for
synchronizing console drivers, although in the end that will be
done another way.

Best Regards,
Petr