Message-ID: <ZqC-TW7ygSSF3MyO@pathway.suse.cz>
Date: Wed, 24 Jul 2024 10:42:11 +0200
From: Petr Mladek <pmladek@...e.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: John Ogness <john.ogness@...utronix.de>,
	Sergey Senozhatsky <senozhatsky@...omium.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Thomas Gleixner <tglx@...utronix.de>, Jan Kara <jack@...e.cz>,
	Peter Zijlstra <peterz@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] printk for 6.11

On Tue 2024-07-23 14:07:12, Linus Torvalds wrote:
> On Tue, 23 Jul 2024 at 13:41, John Ogness <john.ogness@...utronix.de> wrote:
> >
> > Petr's pull request provides the functionality for a CPU to call
> > printk() during emergencies so that each line only goes into the
> > buffer. We also include a function to perform the flush at any time. As
> > the series is implemented now, that flush happens after the warning is
> > completely stored into the buffer. In cases where there is lots of data
> > in the warning (such as in RCU stalls or lockdep splats), the flush
> > happens after significant parts of the warning.
> 
> I really think the flushing needs to be *way* more aggressive for any
> oops. The "flush at end" is not even remotely sane.
> 
> Some amount of buffering can make sense, eg when printing out the
> regular register state over a few lines, there certainly shouldn't be
> anything there that can cause problems.

Yes, this was the intention. I had missed the code path calling
the notifiers. The Oops/die code path is more complicated.

Otherwise, nbcon_cpu_emergency_enter()/exit() are used only around
code printing various well-defined debug reports, like WARN(),
lockdep splats, or RCU stalls.

Note that the buffering happens only in the emergency sections.
The messages are flushed directly once panic() is reached.
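Just for illustration, the intended pattern looks roughly like this
(a sketch only; print_some_report() is a made-up name, the real call
sites are WARN(), lockdep, and the RCU stall code):

	void print_some_report(void)
	{
		/* Defer console output; messages still reach the ring buffer. */
		nbcon_cpu_emergency_enter();

		pr_warn("first line of the report\n");
		pr_warn("last line of the report\n");

		/* Leave the emergency section and flush the deferred messages. */
		nbcon_cpu_emergency_exit();
	}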

Just to be sure: the buffering is _not_ there to solve cosmetic
problems. It is meant to allow storing the important information
into the ring buffer before trying to flush it to consoles, because
flushing to consoles is slow, might trigger softlockups, and can
even cause the system to die.


> Let me pick a very specific example of a common thing:
> 
>    int __die(const char *str, struct pt_regs *regs, long err)
> 
> in arch/x86/kernel/dumpstack.c.
> 
> Look, do I expect problems in "__die_header()"? No.
> 
> But the *moment* you call "notify_die()", you are now calling random
> debug code. The register state NEEDS TO HAVE BEEN FLUSHED before this
> point.

This primarily comes down to the notifiers (random debug code). It
would make sense to try flushing the consoles before calling them.
A generic solution would be:

diff --git a/kernel/notifier.c b/kernel/notifier.c
index b3ce28f39eb6..82989022d8fe 100644
--- a/kernel/notifier.c
+++ b/kernel/notifier.c
@@ -77,6 +77,8 @@ static int notifier_call_chain(struct notifier_block **nl,
 	int ret = NOTIFY_DONE;
 	struct notifier_block *nb, *next_nb;
 
+	nbcon_cpu_emergency_flush();
+
 	nb = rcu_dereference_raw(*nl);
 
 	while (nb && nr_to_call) {
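
Since notifier_call_chain() is the common helper behind the atomic,
blocking, and raw chain variants, this would flush before any notifier
chain is walked, not only before the die notifiers.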


> This is not something I'm willing to debate. Some of the most painful
> debugging sessions I have *EVER* had have been due to "debug code that
> failed".

I could imagine a bug even in WARN() or lockdep. We should probably
add an option to disable the buffering in emergency sections.
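
For example, something as simple as this might be enough (just a
sketch; the name is made up, and whether it should rather be a boot
parameter is open):

	#include <linux/moduleparam.h>

	/*
	 * Sketch of a possible knob: nbcon_cpu_emergency_enter() would
	 * return early when it is disabled, so messages keep going
	 * directly to the consoles.
	 */
	static bool emergency_buffering = true;
	module_param(emergency_buffering, bool, 0644);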

All in all, the buffering in emergency sections was just
a "sounds reasonable" idea which popped up during the demo at
Plumbers. It is not a clear win. We just wanted to give it a try.

Would it be acceptable if we flushed the messages before calling
the notifiers and added an option to disable the buffering? I am not
sure whether it deserves more code. It would probably make sense to
remove the buffering if it causes more harm than good in practice.

Best Regards,
Petr
