Message-ID: <87fske3wzw.fsf@jogness.linutronix.de>
Date: Thu, 09 Jun 2022 13:25:15 +0206
From: John Ogness <john.ogness@...utronix.de>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
Cc: Marek Szyprowski <m.szyprowski@...sung.com>,
Petr Mladek <pmladek@...e.com>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"open list:ARM/Amlogic Meson..." <linux-amlogic@...ts.infradead.org>,
"Theodore Ts'o" <tytso@....edu>,
"Jason A. Donenfeld" <Jason@...c4.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>, kasan-dev@...glegroups.com
Subject: Re: [PATCH printk v5 1/1] printk: extend console_lock for
per-console locking
(Added RANDOM NUMBER DRIVER and KFENCE people.)
Hi Geert,
On 2022-06-08, Geert Uytterhoeven <geert@...ux-m68k.org> wrote:
> =============================
> [ BUG: Invalid wait context ]
> 5.19.0-rc1-ebisu-00802-g06a0dd60d6e4 #431 Not tainted
> -----------------------------
> swapper/0/1 is trying to lock:
> ffffffc00910bac8 (base_crng.lock){....}-{3:3}, at:
> crng_make_state+0x148/0x1e4
> other info that might help us debug this:
> context-{5:5}
> 2 locks held by swapper/0/1:
> #0: ffffffc008f8ae00 (console_lock){+.+.}-{0:0}, at:
> printk_activate_kthreads+0x10/0x54
> #1: ffffffc009da4a28 (&meta->lock){....}-{2:2}, at:
> __kfence_alloc+0x378/0x5c4
> stack backtrace:
> CPU: 0 PID: 1 Comm: swapper/0 Not tainted
> 5.19.0-rc1-ebisu-00802-g06a0dd60d6e4 #431
> Hardware name: Renesas Ebisu-4D board based on r8a77990 (DT)
> Call trace:
> dump_backtrace.part.0+0x98/0xc0
> show_stack+0x14/0x28
> dump_stack_lvl+0xac/0xec
> dump_stack+0x14/0x2c
> __lock_acquire+0x388/0x10a0
> lock_acquire+0x190/0x2c0
> _raw_spin_lock_irqsave+0x6c/0x94
> crng_make_state+0x148/0x1e4
> _get_random_bytes.part.0+0x4c/0xe8
> get_random_u32+0x4c/0x140
> __kfence_alloc+0x460/0x5c4
> kmem_cache_alloc_trace+0x194/0x1dc
> __kthread_create_on_node+0x5c/0x1a8
> kthread_create_on_node+0x58/0x7c
> printk_start_kthread.part.0+0x34/0xa8
> printk_activate_kthreads+0x4c/0x54
> do_one_initcall+0xec/0x278
> kernel_init_freeable+0x11c/0x214
> kernel_init+0x24/0x124
> ret_from_fork+0x10/0x20
I am guessing you have CONFIG_PROVE_RAW_LOCK_NESTING enabled?
We are seeing a spinlock (base_crng.lock) taken while holding a
raw_spinlock (meta->lock).
kfence_guarded_alloc()
  raw_spin_trylock_irqsave(&meta->lock, flags)
  prandom_u32_max()
    prandom_u32()
      get_random_u32()
        get_random_bytes()
          _get_random_bytes()
            crng_make_state()
              spin_lock_irqsave(&base_crng.lock, flags);
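In other words, the pattern reduces to the following (a minimal sketch, not
code from the report; on PREEMPT_RT a spinlock_t becomes a sleeping
rtmutex, which is why lockdep flags taking one inside a raw_spinlock_t
section as an invalid wait context):

```c
/* Sketch only: illustrates the nesting lockdep complains about under
 * CONFIG_PROVE_RAW_LOCK_NESTING. Lock names are stand-ins. */
static DEFINE_RAW_SPINLOCK(outer_raw_lock);	/* plays the role of meta->lock */
static DEFINE_SPINLOCK(inner_lock);		/* plays the role of base_crng.lock */

static void invalid_nesting(void)
{
	unsigned long flags, flags2;

	raw_spin_lock_irqsave(&outer_raw_lock, flags);
	/* A spinlock_t may sleep on PREEMPT_RT, so acquiring it here
	 * is an invalid wait context: */
	spin_lock_irqsave(&inner_lock, flags2);
	spin_unlock_irqrestore(&inner_lock, flags2);
	raw_spin_unlock_irqrestore(&outer_raw_lock, flags);
}
```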
I expect it is allowed to create kthreads via kthread_run() in
early_initcalls.
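For reference, the general shape I mean (a hedged sketch with hypothetical
names, not the printk code itself):

```c
/* Sketch only: creating a kthread from an early_initcall, as the
 * trace above shows printk_activate_kthreads doing. my_thread_fn
 * and my_early_setup are hypothetical names. */
static int __init my_early_setup(void)
{
	struct task_struct *t;

	t = kthread_run(my_thread_fn, NULL, "my_kthread");
	if (IS_ERR(t))
		return PTR_ERR(t);
	return 0;
}
early_initcall(my_early_setup);
```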
John Ogness