Message-ID: <3b7661a1-dbde-ea54-f880-99777c95ae22@kernel.dk>
Date: Thu, 2 Sep 2021 10:22:23 -0600
From: Jens Axboe <axboe@...nel.dk>
To: LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Lameter <cl@...ux.com>,
Linux Memory Management List <linux-mm@...ck.org>
Subject: slub: BUG: Invalid wait context
Hi,
Booting current -git yields the splat below. I'm assuming this is
related to the new RT work, where spin_lock() can sleep. That obviously
won't fly when invoked off an IPI, i.e. in hardirq context.
I'll leave the actual fix to others.
[ 1.430398] =============================
[ 1.430398] [ BUG: Invalid wait context ]
[ 1.430398] 5.14.0+ #11360 Not tainted
[ 1.430398] -----------------------------
[ 1.430533] swapper/0/0 is trying to lock:
[ 1.430743] ffff888100050918 (&n->list_lock){....}-{3:3}, at: deactivate_slab+0x213/0x540
[ 1.431171] other info that might help us debug this:
[ 1.431430] context-{2:2}
[ 1.431567] no locks held by swapper/0/0.
[ 1.431774] stack backtrace:
[ 1.431923] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.14.0+ #11360
[ 1.432246] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1.432826] Call Trace:
[ 1.432961] <IRQ>
[ 1.433071] dump_stack_lvl+0x45/0x59
[ 1.433273] __lock_acquire.cold+0x21a/0x34d
[ 1.433504] ? lock_chain_count+0x20/0x20
[ 1.433722] ? lockdep_hardirqs_on_prepare+0x1f0/0x1f0
[ 1.433990] ? __lock_acquire+0x86b/0x30b0
[ 1.434206] lock_acquire+0x157/0x3e0
[ 1.434399] ? deactivate_slab+0x213/0x540
[ 1.434615] ? lock_release+0x410/0x410
[ 1.434815] ? lockdep_hardirqs_on_prepare+0x1f0/0x1f0
[ 1.435081] ? mark_held_locks+0x65/0x90
[ 1.435286] ? lock_is_held_type+0x98/0x110
[ 1.435509] ? lock_is_held_type+0x98/0x110
[ 1.435728] _raw_spin_lock+0x2c/0x40
[ 1.435922] ? deactivate_slab+0x213/0x540
[ 1.436136] deactivate_slab+0x213/0x540
[ 1.436341] ? sched_clock_tick+0x49/0x80
[ 1.436556] ? lock_is_held_type+0x98/0x110
[ 1.436774] flush_cpu_slab+0x34/0x50
[ 1.436966] flush_smp_call_function_queue+0xf6/0x2c0
[ 1.437228] ? slub_cpu_dead+0xe0/0xe0
[ 1.437426] __sysvec_call_function_single+0x6b/0x280
[ 1.437691] sysvec_call_function_single+0x65/0x90
[ 1.437940] </IRQ>
[ 1.438053] asm_sysvec_call_function_single+0xf/0x20
[ 1.438314] RIP: 0010:default_idle+0x10/0x20
[ 1.438539] Code: ff f0 80 63 02 df 5b 41 5c c3 0f ae f0 0f ae 3b 0f ae f0 eb 90 0f 1f 44 00 00 0f 1f 44 00 00 eb 07 0f 00 2d 92 5d 45 00 fb f4 <c3> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 0f 1f 44 00 00 41
[ 1.439481] RSP: 0000:ffffffff82a07e60 EFLAGS: 00000206
[ 1.439605] RAX: 0000000000001811 RBX: ffffffff82a1f400 RCX: ffffffff81dbddc5
[ 1.439605] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff81dce145
[ 1.439605] RBP: 0000000000000000 R08: 0000000000000001 R09: ffff8881f7630b0b
[ 1.439605] R10: ffffed103eec6161 R11: 0000000000000000 R12: ffffffff8306c7b0
[ 1.439605] R13: 0000000000000000 R14: 0000000000000000 R15: 1ffffffff0540fd1
[ 1.439605] ? rcu_eqs_enter.constprop.0+0xa5/0xc0
[ 1.439605] ? default_idle_call+0x45/0xb0
[ 1.439605] default_idle_call+0x7d/0xb0
[ 1.439605] do_idle+0x31c/0x3d0
[ 1.439605] ? lock_downgrade+0x390/0x390
[ 1.439605] ? arch_cpu_idle_exit+0x40/0x40
[ 1.439605] cpu_startup_entry+0x19/0x20
[ 1.439605] start_kernel+0x38d/0x3ab
[ 1.439605] secondary_startup_64_no_verify+0xb0/0xbb
--
Jens Axboe