Message-ID: <000000000000d605280618aebfb8@google.com>
Date: Fri, 17 May 2024 16:31:22 -0700
From: syzbot <syzbot+0b95946cd0588e2ad0f5@...kaller.appspotmail.com>
To: andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org,
daniel@...earbox.net, davem@...emloft.net, edumazet@...gle.com,
jakub@...udflare.com, john.fastabend@...il.com, kuba@...nel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org, pabeni@...hat.com,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [bpf?] [net?] possible deadlock in sock_hash_update_common

syzbot has found a reproducer for the following issue on:

HEAD commit: 71ed6c266348 bpf: Fix order of args in call to bpf_map_kvc..
git tree: bpf-next
console+strace: https://syzkaller.appspot.com/x/log.txt?x=17554e3f180000
kernel config: https://syzkaller.appspot.com/x/.config?x=bd214b7accd7fc53
dashboard link: https://syzkaller.appspot.com/bug?extid=0b95946cd0588e2ad0f5
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1515d8b2980000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=167b60dc980000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/5802d805367c/disk-71ed6c26.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/463c507f7ca0/vmlinux-71ed6c26.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0958a8d8b793/bzImage-71ed6c26.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0b95946cd0588e2ad0f5@...kaller.appspotmail.com
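
(For placement, the tag goes with the other trailers of the fix commit; a hypothetical example, subject line made up for illustration:

    bpf, sockmap: fix bucket/psock link lock inversion

    <changelog>

    Reported-by: syzbot+0b95946cd0588e2ad0f5@...kaller.appspotmail.com
    Signed-off-by: Author Name <author@example.com>)
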
======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc7-syzkaller-02064-g71ed6c266348 #0 Not tainted
------------------------------------------------------
syz-executor469/5083 is trying to acquire lock:
ffff88801ba8c2b0 (&psock->link_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff88801ba8c2b0 (&psock->link_lock){+...}-{2:2}, at: sock_map_add_link net/core/sock_map.c:146 [inline]
ffff88801ba8c2b0 (&psock->link_lock){+...}-{2:2}, at: sock_hash_update_common+0x624/0xa30 net/core/sock_map.c:1041
but task is already holding lock:
ffff88801a299520 (&htab->buckets[i].lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff88801a299520 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1025
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&htab->buckets[i].lock){+...}-{2:2}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_delete_elem+0x17c/0x400 net/core/sock_map.c:957
bpf_prog_78b015942f8c5b4e+0x63/0x67
bpf_dispatcher_nop_func include/linux/bpf.h:1243 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2403 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2444
trace_kfree include/trace/events/kmem.h:94 [inline]
kfree+0x2bd/0x3b0 mm/slub.c:4383
sk_psock_free_link include/linux/skmsg.h:425 [inline]
sock_map_del_link net/core/sock_map.c:170 [inline]
sock_map_unref+0x3ac/0x5e0 net/core/sock_map.c:192
sock_map_update_common+0x4f0/0x5b0 net/core/sock_map.c:518
sock_map_update_elem_sys+0x55f/0x910 net/core/sock_map.c:594
map_update_elem+0x53a/0x6f0 kernel/bpf/syscall.c:1654
__sys_bpf+0x76f/0x810 kernel/bpf/syscall.c:5670
__do_sys_bpf kernel/bpf/syscall.c:5789 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5787 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5787
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&psock->link_lock){+...}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_map_add_link net/core/sock_map.c:146 [inline]
sock_hash_update_common+0x624/0xa30 net/core/sock_map.c:1041
sock_map_update_elem_sys+0x5a4/0x910 net/core/sock_map.c:596
map_update_elem+0x53a/0x6f0 kernel/bpf/syscall.c:1654
__sys_bpf+0x76f/0x810 kernel/bpf/syscall.c:5670
__do_sys_bpf kernel/bpf/syscall.c:5789 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5787 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5787
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               lock(&psock->link_lock);
                               lock(&htab->buckets[i].lock);
  lock(&psock->link_lock);

 *** DEADLOCK ***
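
To make the inversion concrete: per chain #0 above, sock_hash_update_common() takes &htab->buckets[i].lock and then &psock->link_lock, while per chain #1 a BPF program on the kfree tracepoint, fired while sock_map_del_link() holds &psock->link_lock, calls sock_hash_delete_elem() and takes a bucket lock, i.e. the opposite order. A minimal userspace sketch of that AB-BA pattern, assuming nothing beyond the two orderings shown above (plain pthreads, illustrative names, not the kernel code paths):

/*
 * AB-BA lock inversion demo (userspace analogy only).
 * bucket_lock stands in for &htab->buckets[i].lock,
 * link_lock   stands in for &psock->link_lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t link_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the sock_hash_update_common() order: bucket, then link. */
static void *update_path(void *arg)
{
	pthread_mutex_lock(&bucket_lock);
	usleep(1000);			/* widen the race window */
	pthread_mutex_lock(&link_lock);
	pthread_mutex_unlock(&link_lock);
	pthread_mutex_unlock(&bucket_lock);
	return NULL;
}

/* Mirrors the sock_map_del_link() -> trace_kfree -> sock_hash_delete_elem()
 * order: link, then bucket. */
static void *delete_path(void *arg)
{
	pthread_mutex_lock(&link_lock);
	usleep(1000);
	pthread_mutex_lock(&bucket_lock);
	pthread_mutex_unlock(&bucket_lock);
	pthread_mutex_unlock(&link_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, update_path, NULL);
	pthread_create(&b, NULL, delete_path, NULL);
	pthread_join(a, NULL);		/* a deadlocked run hangs here */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

Built with cc -pthread, a few runs are usually enough to hang; lockdep flags the same ordering statically, without needing the timing to line up.
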
3 locks held by syz-executor469/5083:
#0: ffff88807e797258 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1595 [inline]
#0: ffff88807e797258 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: sock_map_sk_acquire net/core/sock_map.c:129 [inline]
#0: ffff88807e797258 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: sock_map_update_elem_sys+0x1cc/0x910 net/core/sock_map.c:590
#1: ffffffff8e334ea0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#1: ffffffff8e334ea0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#1: ffffffff8e334ea0 (rcu_read_lock){....}-{1:2}, at: sock_map_sk_acquire net/core/sock_map.c:130 [inline]
#1: ffffffff8e334ea0 (rcu_read_lock){....}-{1:2}, at: sock_map_update_elem_sys+0x1d8/0x910 net/core/sock_map.c:590
#2: ffff88801a299520 (&htab->buckets[i].lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
#2: ffff88801a299520 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1025
stack backtrace:
CPU: 1 PID: 5083 Comm: syz-executor469 Not tainted 6.9.0-rc7-syzkaller-02064-g71ed6c266348 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_map_add_link net/core/sock_map.c:146 [inline]
sock_hash_update_common+0x624/0xa30 net/core/sock_map.c:1041
sock_map_update_elem_sys+0x5a4/0x910 net/core/sock_map.c:596
map_update_elem+0x53a/0x6f0 kernel/bpf/syscall.c:1654
__sys_bpf+0x76f/0x810 kernel/bpf/syscall.c:5670
__do_sys_bpf kernel/bpf/syscall.c:5789 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5787 [inline]
__x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5787
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f98a7323a69
Code: 48 83 c4 28 c3 e8 37 17 00 00 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffea2336c68 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
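
For example, to have syzbot test a candidate fix against the same tree and commit this report was generated on (bpf-next at 71ed6c266348, per the header above), the reply could look like the line below; the kernel.org URL is the usual location of the bpf-next tree and is given here as an assumption, not taken from this report:

#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git 71ed6c266348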