Message-ID: <87r1fjiwsn.fsf@jogness.linutronix.de>
Date: Tue, 27 Jul 2021 17:53:04 +0206
From: John Ogness <john.ogness@...utronix.de>
To: Petr Mladek <pmladek@...e.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Petr Mladek <pmladek@...e.com>
Subject: Re: [PATCH] lib/nmi_backtrace: Serialize even messages about idle CPUs

On 2021-07-27, Petr Mladek <pmladek@...e.com> wrote:
> The commit 55d6af1d66885059ffc2a ("lib/nmi_backtrace: explicitly serialize
> banner and regs") serialized backtraces from multiple CPUs using the
> re-entrant printk_cpu lock. It was a preparation step for removing the
> obsolete nmi_safe buffers.
>
> The single-line messages about idle CPUs were not serialized against other
> CPUs and might appear in the middle of a backtrace from another CPU,
> for example:
>
> [56394.590068] NMI backtrace for cpu 2
> [56394.590069] CPU: 2 PID: 444 Comm: systemd-journal Not tainted 5.14.0-rc1-default+ #268
> [56394.590071] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba527-rebuilt.opensuse.org 04/01/2014
> [56394.590072] RIP: 0010:lock_is_held_type+0x0/0x120
> [56394.590071] NMI backtrace for cpu 0 skipped: idling at native_safe_halt+0xb/0x10
> [56394.590076] Code: a2 38 ff 0f 0b 8b 44 24 04 eb bd 48 8d ...
> [56394.590077] RSP: 0018:ffffab02c07c7e68 EFLAGS: 00000246
> [56394.590079] RAX: 0000000000000000 RBX: ffff9a7bc0ec8a40 RCX: ffffffffaab8eb40
>
> It might cause confusion about which CPU the following lines belong to and
> whether the backtraces are really serialized.
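
For context, the change is essentially to take the cpu lock around the
idle message as well, so that it cannot interleave with another CPU's
backtrace. Roughly like this (sketched from memory, not necessarily the
exact hunk in lib/nmi_backtrace.c):

bool nmi_cpu_backtrace(struct pt_regs *regs)
{
	int cpu = smp_processor_id();
	unsigned long flags;

	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
		/* Serialize the whole report, including the idle case. */
		printk_cpu_lock_irqsave(flags);
		if (regs && cpu_in_idle(instruction_pointer(regs))) {
			pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n",
				cpu, (void *)instruction_pointer(regs));
		} else {
			pr_warn("NMI backtrace for cpu %d\n", cpu);
			if (regs)
				show_regs(regs);
			else
				dump_stack();
		}
		printk_cpu_unlock_irqrestore(flags);
		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
		return true;
	}

	return false;
}
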
I originally implemented this, but later decided against it because it
causes idle CPUs to begin busy-waiting in NMI context just to log a
single line saying they are idle. If the user is aware that the idle
message is only ever a single line, then the user knows that it isn't
causing a problem for reading the stack trace.
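
To be concrete about the busy-waiting: printk_cpu_lock_irqsave() is a
cpu-reentrant spinning lock, roughly this shape (quoted from memory, the
details may differ slightly):

#define printk_cpu_lock_irqsave(flags)		\
	for (;;) {				\
		local_irq_save(flags);		\
		if (__printk_cpu_trylock())	\
			break;			\
		local_irq_restore(flags);	\
		__printk_wait_on_cpu_lock();	\
	}

So every idle CPU in the backtrace mask sits in that loop from NMI
context just to emit one line.
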
When triggering many such dumps on systems with many CPUs where this
patch is applied, it seemed like I was making the whole system work
awfully hard for something that should be trivial.

Considering that dump_stack() and show_regs() should be fast and we are
only dumping to the lockless buffer, it is probably OK to be doing all
the busy-waiting. Once atomic consoles are introduced, the busy-waiting
will have quite an impact here, but atomic consoles are mostly only
active on a system crash, so I think that would be OK as well.

Feel free to add:
Reviewed-by: John Ogness <john.ogness@...utronix.de>