Message-ID: <CAG48ez3iiN6Y77F_7Rdba6_obhAMaTQ+M0YfGMV2Fk762-5PZg@mail.gmail.com>
Date: Wed, 1 Apr 2020 21:34:29 +0200
From: Jann Horn <jannh@...gle.com>
To: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc: Petr Mladek <pmladek@...e.com>,
Steven Rostedt <rostedt@...dmis.org>,
kernel list <linux-kernel@...r.kernel.org>,
Lech Perczak <l.perczak@...lintechnologies.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Theodore Ts'o" <tytso@....edu>,
John Ogness <john.ogness@...utronix.de>
Subject: Re: [PATCHv2] printk: queue wake_up_klogd irq_work only if per-CPU
areas are ready
On Tue, Mar 3, 2020 at 12:30 PM Sergey Senozhatsky
<sergey.senozhatsky@...il.com> wrote:
> printk_deferred(), similarly to printk_safe/printk_nmi,
> does not immediately attempt to print a new message on
> the consoles, avoiding calls into non-reentrant kernel
> paths, e.g. the scheduler or timekeeping, which can
> potentially deadlock the system. Those printk() flavors
> instead rely on a per-CPU flush irq_work to print
> messages from safer contexts. For the same reasons
> (recursive scheduler or timekeeping calls) printk() uses
> a per-CPU irq_work to wake up user-space syslog/kmsg
> readers.
>
> However, only printk_safe/printk_nmi make sure that
> per-CPU areas have been initialised and that it is safe
> to modify the per-CPU irq_work. This means that, for
> instance, should printk_deferred() be invoked "too
> early", that is, before per-CPU areas are initialised,
> printk_deferred() will perform an illegal per-CPU
> access.
>
> Lech Perczak [0] reports that after commit 1b710b1b10ef
> ("char/random: silence a lockdep splat with printk()")
> user-space syslog/kmsg readers are no longer able to
> read new kernel messages. The reason is that
> printk_deferred() is called too early (as was pointed
> out by Petr and John).
>
> Fix printk_deferred(): do not queue the per-CPU irq_work
> before per-CPU areas are initialised.
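
For context, the fix described above amounts to a readiness guard in
front of any per-CPU access. Below is a minimal sketch against a
recent kernel tree; the identifiers (__printk_percpu_data_ready,
set_printk_percpu_data_ready) and the IRQ_WORK_INIT_LAZY initializer
are illustrative assumptions, not necessarily what the applied patch
uses:

#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(log_wait);

/* Flipped once per-CPU areas are usable; checked before per-CPU access. */
static bool __printk_percpu_data_ready __read_mostly;

static void wake_up_klogd_work_func(struct irq_work *irq_work)
{
	/* Safe context: wake the user-space syslog/kmsg readers here. */
	wake_up_interruptible(&log_wait);
}

static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) =
	IRQ_WORK_INIT_LAZY(wake_up_klogd_work_func);

/* Called from early boot code once setup_per_cpu_areas() has run. */
static void __init set_printk_percpu_data_ready(void)
{
	__printk_percpu_data_ready = true;
}

void wake_up_klogd(void)
{
	/*
	 * Bail out early: touching this_cpu_ptr(&wake_up_klogd_work)
	 * before per-CPU areas are set up is exactly the illegal
	 * access the commit message describes.
	 */
	if (!__printk_percpu_data_ready)
		return;

	preempt_disable();
	if (waitqueue_active(&log_wait))
		irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
	preempt_enable();
}

The key point is that the check happens before any this_cpu_*()
dereference, so a "too early" caller simply drops the wakeup instead
of corrupting memory.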
I ran into the same issue during some development work, and Sergey
directed me to this patch. It fixes the problem for me. Thanks!
Tested-by: Jann Horn <jannh@...gle.com>