Message-ID: <000000000000921c0b061eecd080@google.com>
Date: Mon, 05 Aug 2024 03:05:20 -0700
From: syzbot <syzbot+ee7551b0640c5471e610@...kaller.appspotmail.com>
To: andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org,
daniel@...earbox.net, eddyz87@...il.com, haoluo@...gle.com,
john.fastabend@...il.com, jolsa@...nel.org, kpsingh@...nel.org,
linux-kernel@...r.kernel.org, martin.lau@...ux.dev, netdev@...r.kernel.org,
sdf@...ichev.me, song@...nel.org, syzkaller-bugs@...glegroups.com,
yonghong.song@...ux.dev
Subject: [syzbot] [bpf?] possible deadlock in htab_lock_bucket (2)

Hello,

syzbot found the following issue on:
HEAD commit: 3d650ab5e7d9 selftests/bpf: Fix a btf_dump selftest failure
git tree: bpf-next
console+strace: https://syzkaller.appspot.com/x/log.txt?x=1628e483980000
kernel config: https://syzkaller.appspot.com/x/.config?x=5efb917b1462a973
dashboard link: https://syzkaller.appspot.com/bug?extid=ee7551b0640c5471e610
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=117de4e5980000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=142e86bd980000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/630e210de8d9/disk-3d650ab5.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3576ca35748a/vmlinux-3d650ab5.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5b33f099abfa/bzImage-3d650ab5.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ee7551b0640c5471e610@...kaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.10.0-syzkaller-12666-g3d650ab5e7d9 #0 Not tainted
------------------------------------------------------
strace-static-x/5224 is trying to acquire lock:
ffff888024c4e218 (&htab->lockdep_key){....}-{2:2}, at: htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
but task is already holding lock:
ffff888023c33188 (&htab->lockdep_key#3){....}-{2:2}, at: htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&htab->lockdep_key#3){....}-{2:2}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
htab_lru_map_delete_elem+0x1f1/0x700 kernel/bpf/hashtab.c:1462
bpf_prog_6f5f05285f674219+0x43/0x4c
bpf_dispatcher_nop_func include/linux/bpf.h:1252 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2406 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2447
__traceiter_contention_begin+0x7b/0xb0 include/trace/events/lock.h:95
trace_contention_begin+0x117/0x140 include/trace/events/lock.h:95
__pv_queued_spin_lock_slowpath+0x114/0xdc0 kernel/locking/qspinlock.c:402
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x272/0x370 kernel/locking/spinlock_debug.c:116
htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
htab_lru_map_delete_elem+0x1f1/0x700 kernel/bpf/hashtab.c:1462
0xffffffffa000204b
bpf_dispatcher_nop_func include/linux/bpf.h:1252 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2406 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2447
__traceiter_contention_begin+0x7b/0xb0 include/trace/events/lock.h:95
trace_contention_begin+0xf5/0x120 include/trace/events/lock.h:95
__mutex_lock_common kernel/locking/mutex.c:610 [inline]
__mutex_lock+0x147/0xd70 kernel/locking/mutex.c:752
pipe_read+0x12a/0x13e0 fs/pipe.c:264
new_sync_read fs/read_write.c:395 [inline]
vfs_read+0x9bd/0xbc0 fs/read_write.c:476
ksys_read+0x1a0/0x2c0 fs/read_write.c:619
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&htab->lockdep_key){....}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
htab_lru_map_delete_elem+0x1f1/0x700 kernel/bpf/hashtab.c:1462
0xffffffffa000204b
bpf_dispatcher_nop_func include/linux/bpf.h:1252 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2406 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2447
__traceiter_contention_begin+0x7b/0xb0 include/trace/events/lock.h:95
trace_contention_begin+0x117/0x140 include/trace/events/lock.h:95
__pv_queued_spin_lock_slowpath+0x114/0xdc0 kernel/locking/qspinlock.c:402
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x272/0x370 kernel/locking/spinlock_debug.c:116
htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
htab_lru_map_delete_elem+0x1f1/0x700 kernel/bpf/hashtab.c:1462
bpf_prog_6f5f05285f674219+0x43/0x4c
bpf_dispatcher_nop_func include/linux/bpf.h:1252 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2406 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2447
__traceiter_contention_begin+0x7b/0xb0 include/trace/events/lock.h:95
trace_contention_begin+0xf5/0x120 include/trace/events/lock.h:95
__mutex_lock_common kernel/locking/mutex.c:610 [inline]
__mutex_lock+0x147/0xd70 kernel/locking/mutex.c:752
pipe_write+0x1c9/0x1a40 fs/pipe.c:455
new_sync_write fs/read_write.c:497 [inline]
vfs_write+0xa72/0xc90 fs/read_write.c:590
ksys_write+0x1a0/0x2c0 fs/read_write.c:643
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->lockdep_key#3);
                               lock(&htab->lockdep_key);
                               lock(&htab->lockdep_key#3);
  lock(&htab->lockdep_key);

 *** DEADLOCK ***

4 locks held by strace-static-x/5224:
#0: ffff88802c11bc68 (&pipe->mutex){+.+.}-{3:3}, at: pipe_write+0x1c9/0x1a40 fs/pipe.c:455
#1: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
#1: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#1: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2405 [inline]
#1: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x1fc/0x540 kernel/trace/bpf_trace.c:2447
#2: ffff888023c33188 (&htab->lockdep_key#3){....}-{2:2}, at: htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
#3: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
#3: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#3: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2405 [inline]
#3: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x1fc/0x540 kernel/trace/bpf_trace.c:2447

stack backtrace:
CPU: 0 UID: 0 PID: 5224 Comm: strace-static-x Not tainted 6.10.0-syzkaller-12666-g3d650ab5e7d9 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:93 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2186
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
htab_lock_bucket+0x1a4/0x370 kernel/bpf/hashtab.c:167
htab_lru_map_delete_elem+0x1f1/0x700 kernel/bpf/hashtab.c:1462
</TASK>
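
For context on the cycle: both stacks in the chain are the same pattern nested
twice. A BPF program attached to the contention_begin tracepoint deletes from
an LRU hash map, htab_lru_map_delete_elem() takes a per-bucket spinlock, and
contention on that spinlock fires contention_begin again, re-entering the
program and taking a second bucket lock. A minimal sketch of a program that
exercises this path follows; it is an illustration only, not the actual
reproducer (that is the C reproducer linked above), and the map, program, and
file names here are assumptions:

/* lru_tp.bpf.c -- minimal sketch; all names are assumptions */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_LRU_HASH);
	__uint(max_entries, 2);
	__type(key, u64);
	__type(value, u64);
} lru SEC(".maps");

/* contention_begin fires whenever a lock slow path is entered
 * (see trace_contention_begin in the stacks above). */
SEC("tp_btf/contention_begin")
int BPF_PROG(on_contention, void *lock, unsigned int flags)
{
	u64 key = 0;

	/* htab_lru_map_delete_elem() takes a per-bucket spinlock via
	 * htab_lock_bucket() (kernel/bpf/hashtab.c:167). If that
	 * spinlock is itself contended, trace_contention_begin()
	 * fires again, re-enters this program, and takes another
	 * bucket lock -- the circular dependency lockdep reports
	 * above. */
	bpf_map_delete_elem(&lru, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Under this sketch's assumptions, any concurrent lock contention elsewhere in
the kernel (for example the pipe->mutex contention visible in the pipe_read
and pipe_write stacks above) is enough to fire the tracepoint and start the
recursion.
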
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@...glegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup