Message-ID: <e06d69c9-f067-45c6-b604-fd340c3bd612@suse.cz>
Date: Wed, 30 Oct 2024 22:48:50 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: paulmck@...nel.org, linux-next@...r.kernel.org,
linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com, linux-mm@...ck.org
Cc: sfr@...b.auug.org.au, bigeasy@...utronix.de, longman@...hat.com,
boqun.feng@...il.com, elver@...gle.com, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, iamjoonsoo.kim@....com, akpm@...ux-foundation.org
Subject: Re: [BUG] -next lockdep invalid wait context
On 10/30/24 22:05, Paul E. McKenney wrote:
> Hello!
Hi!
> The next-20241030 release gets the splat shown below when running
> scftorture in a preemptible kernel. This bisects to this commit:
>
> 560af5dc839e ("lockdep: Enable PROVE_RAW_LOCK_NESTING with PROVE_LOCKING")
>
> Except that all this is doing is enabling lockdep to find the problem.
>
> The obvious way to fix this is to make the kmem_cache structure's
> cpu_slab field's ->lock be a raw spinlock, but this might not be what
> we want for real-time response.
But it's a local_lock, not a spinlock, and it's taken with
local_lock_irqsave(). I'm confused about what's happening here; the code has
been like this for years now.
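For reference, the pattern there looks roughly like this (a paraphrased
sketch of the relevant bits of mm/slub.c, not a verbatim copy; unrelated
fields and logic elided, struct kmem_cache/struct slab come from the slab
internals):

#include <linux/local_lock.h>

struct kmem_cache_cpu {
	local_lock_t lock;	/* the "&c->lock" in the splat above */
	/* freelist, tid, slab, partial, ... */
};

static void put_cpu_partial(struct kmem_cache *s, struct slab *slab,
			    int drain)
{
	unsigned long flags;

	/* a local_lock_irqsave(), not a raw_spin_lock_irqsave() */
	local_lock_irqsave(&s->cpu_slab->lock, flags);
	/* ... link the slab into the per-CPU partial list ... */
	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
}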
> This can be reproduced deterministically as follows:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --torture scf --allcpus --duration 2 --configs PREEMPT --kconfig CONFIG_NR_CPUS=64 --memory 7G --trust-make --kasan --bootargs "scftorture.nthreads=64 torture.disable_onoff_at_boot csdlock_debug=1"
>
> I doubt that the number of CPUs or amount of memory makes any difference,
> but that is what I used.
>
> Thoughts?
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> [ 35.659746] =============================
> [ 35.659746] [ BUG: Invalid wait context ]
> [ 35.659746] 6.12.0-rc5-next-20241029 #57233 Not tainted
> [ 35.659746] -----------------------------
> [ 35.659746] swapper/37/0 is trying to lock:
> [ 35.659746] ffff8881ff4bf2f0 (&c->lock){....}-{3:3}, at: put_cpu_partial+0x49/0x1b0
> [ 35.659746] other info that might help us debug this:
> [ 35.659746] context-{2:2}
> [ 35.659746] no locks held by swapper/37/0.
> [ 35.659746] stack backtrace:
> [ 35.659746] CPU: 37 UID: 0 PID: 0 Comm: swapper/37 Not tainted 6.12.0-rc5-next-20241029 #57233
> [ 35.659746] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 35.659746] Call Trace:
> [ 35.659746] <IRQ>
> [ 35.659746] dump_stack_lvl+0x68/0xa0
> [ 35.659746] __lock_acquire+0x8fd/0x3b90
> [ 35.659746] ? start_secondary+0x113/0x210
> [ 35.659746] ? __pfx___lock_acquire+0x10/0x10
> [ 35.659746] ? __pfx___lock_acquire+0x10/0x10
> [ 35.659746] ? __pfx___lock_acquire+0x10/0x10
> [ 35.659746] ? __pfx___lock_acquire+0x10/0x10
> [ 35.659746] lock_acquire+0x19b/0x520
> [ 35.659746] ? put_cpu_partial+0x49/0x1b0
> [ 35.659746] ? __pfx_lock_acquire+0x10/0x10
> [ 35.659746] ? __pfx_lock_release+0x10/0x10
> [ 35.659746] ? lock_release+0x20f/0x6f0
> [ 35.659746] ? __pfx_lock_release+0x10/0x10
> [ 35.659746] ? lock_release+0x20f/0x6f0
> [ 35.659746] ? kasan_save_track+0x14/0x30
> [ 35.659746] put_cpu_partial+0x52/0x1b0
> [ 35.659746] ? put_cpu_partial+0x49/0x1b0
> [ 35.659746] ? __pfx_scf_handler_1+0x10/0x10
> [ 35.659746] __flush_smp_call_function_queue+0x2d2/0x600
How did we even get to put_cpu_partial() directly from flushing smp calls?
SLUB doesn't use them; it uses queue_work_on() for flushing, and that
flushing doesn't involve put_cpu_partial() AFAIK.

I think only a slab allocation or free can lead to put_cpu_partial(), so the
backtrace must be missing something. That would mean somebody does a slab
alloc/free from a smp callback, which I'd then assume isn't allowed?
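I.e. something like this hypothetical sketch (handler and function names
made up for illustration, not taken from scftorture):

#include <linux/smp.h>
#include <linux/slab.h>

/*
 * A smp_call_function_single() callback runs in hardirq context; if it
 * does a kfree() that reaches __slab_free() -> put_cpu_partial(), the
 * local_lock is taken from that context, which is presumably what
 * PROVE_RAW_LOCK_NESTING is flagging in the splat above.
 */
static void example_scf_handler(void *info)
{
	kfree(info);	/* may end up in put_cpu_partial() */
}

static void example_trigger(int cpu, void *obj)
{
	smp_call_function_single(cpu, example_scf_handler, obj, 0);
}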
> [ 35.659746] __sysvec_call_function_single+0x50/0x280
> [ 35.659746] sysvec_call_function_single+0x6b/0x80
> [ 35.659746] </IRQ>
> [ 35.659746] <TASK>
> [ 35.659746] asm_sysvec_call_function_single+0x1a/0x20
> [ 35.659746] RIP: 0010:default_idle+0xf/0x20
> [ 35.659746] Code: 4c 01 c7 4c 29 c2 e9 72 ff ff ff 90 90 90 90 90 90 90 90 90
> 90 90 90 90 90 90 90 f3 0f 1e fa eb 07 0f 00 2d 33 80 3e 00 fb f4 <fa> c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
> [ 35.659746] RSP: 0018:ffff888100a9fe68 EFLAGS: 00000202
> [ 35.659746] RAX: 0000000000040d75 RBX: 0000000000000025 RCX: ffffffffab83df45
> [ 35.659746] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa8a5f7ba
> [ 35.659746] RBP: dffffc0000000000 R08: 0000000000000001 R09: ffffed103fe96c3c
> [ 35.659746] R10: ffff8881ff4b61e3 R11: 0000000000000000 R12: ffffffffad13f1d0
> [ 35.659746] R13: 1ffff11020153fd2 R14: 0000000000000000 R15: 0000000000000000
> [ 35.659746] ? ct_kernel_exit.constprop.0+0xc5/0xf0
> [ 35.659746] ? do_idle+0x2fa/0x3b0
> [ 35.659746] default_idle_call+0x6d/0xb0
> [ 35.659746] do_idle+0x2fa/0x3b0
> [ 35.659746] ? __pfx_do_idle+0x10/0x10
> [ 35.659746] cpu_startup_entry+0x4f/0x60
> [ 35.659746] start_secondary+0x1bc/0x210
> [ 35.659746] common_startup_64+0x12c/0x138
> [ 35.659746] </TASK>