Message-ID: <CAEf4Bza8iH4_JY_sN-6GYeSfn6iuUsLMzxd=xRkCC7q-3_StNQ@mail.gmail.com>
Date: Tue, 12 Mar 2024 15:37:16 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Jiri Olsa <olsajiri@...il.com>
Cc: syzbot <syzbot+850aaf14624dc0c6d366@...kaller.appspotmail.com>, 
	andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org, daniel@...earbox.net, 
	haoluo@...gle.com, john.fastabend@...il.com, kpsingh@...nel.org, 
	linux-kernel@...r.kernel.org, martin.lau@...ux.dev, netdev@...r.kernel.org, 
	sdf@...gle.com, song@...nel.org, syzkaller-bugs@...glegroups.com, 
	yonghong.song@...ux.dev
Subject: Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve

On Tue, Mar 12, 2024 at 2:18 PM Jiri Olsa <olsajiri@...il.com> wrote:
>
> On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> > On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > > git tree:       bpf
> > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > > kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > > compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > > C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > >
> > > Downloadable assets:
> > > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+850aaf14624dc0c6d366@...kaller.appspotmail.com
> > >
> > > ============================================
> > > WARNING: possible recursive locking detected
> > > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > > --------------------------------------------
> > > strace-static-x/5063 is trying to acquire lock:
> > > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > but task is already holding lock:
> > > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > other info that might help us debug this:
> > >  Possible unsafe locking scenario:
> > >
> > >        CPU0
> > >        ----
> > >   lock(&rb->spinlock);
> > >   lock(&rb->spinlock);
> > >
> > >  *** DEADLOCK ***
> > >
> > >  May be due to missing lock nesting notation
> > >
> > > 4 locks held by strace-static-x/5063:
> > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > >  #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > >
> > > stack backtrace:
> > > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > > Call Trace:
> > >  <TASK>
> > >  __dump_stack lib/dump_stack.c:88 [inline]
> > >  dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > >  check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > >  validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > >  __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > >  lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > >  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > >  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > >  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > >  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > >  bpf_prog_9efe54833449f08e+0x2d/0x47
> > >  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > >  __bpf_prog_run include/linux/filter.h:651 [inline]
> > >  bpf_prog_run include/linux/filter.h:658 [inline]
> > >  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> >
> > hum, scratching my head how this could have passed through the prog->active check,
>
> nah, could be 2 instances of the same program, I got confused by the tag
>
> trace_contention_end
>   __bpf_trace_run(prog1)
>     bpf_prog_9efe54833449f08e
>       bpf_ringbuf_reserve
>         trace_contention_end
>           __bpf_trace_run(prog1)  prog1->active check fails
>           __bpf_trace_run(prog2)
>             bpf_prog_9efe54833449f08e
>               bpf_ringbuf_reserve
>                 lockup
>
> we had a similar issue in [1] and we replaced the lock with extra buffers,
> not sure that's possible in bpf_ringbuf_reserve
>

Having trace_contention_begin and trace_contention_end fire in such
low-level parts of the ringbuf (and, I'm sure, in anything in BPF
that uses a spinlock) is unfortunate. I'm not sure what the best
solution is, but it would be great if we had the ability to disable
these tracepoints when taking a lock in low-level BPF infrastructure.
Given that BPF programs can attach to these tracepoints, it's best to
avoid this arbitrary nesting of BPF ringbuf calls. Also note that no
per-program protection will help, because independent BPF programs
can be using the same map, as the sketch below illustrates.
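
To make that concrete (an assumed minimal example, not the syzkaller
reproducer; program and map names are made up): load two separate
copies of a program like the one below, both attached to the
contention_end tracepoint and both using the same pinned ringbuf map.
Copy A's bpf_ringbuf_reserve() takes rb->spinlock; if that lock
contends, trace_contention_end fires again, copy A is skipped by its
own active counter, but copy B runs and tries to take the very same
rb->spinlock.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* Pinned by name, so both loaded copies share one ringbuf instance;
   * each copy's prog->active counter protects only that copy, not the
   * shared map. */
  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 4096);
          __uint(pinning, LIBBPF_PIN_BY_NAME);
  } rb SEC(".maps");

  SEC("tp_btf/contention_end")
  int BPF_PROG(on_contention_end, void *lock, int ret)
  {
          /* reserve takes rb->spinlock; if it contends, this same
           * tracepoint fires again under the held lock */
          __u64 *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

          if (!e)
                  return 0;
          *e = (__u64)lock;
          bpf_ringbuf_submit(e, 0);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";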


> jirka
>
>
> [1] e2bb9e01d589 bpf: Remove trace_printk_lock
>
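
For reference, what [1] did was drop the lock in favor of per-CPU
buffers guarded by a nesting counter. Something of a similar shape
could conceivably guard the ringbuf reserve path; a purely
hypothetical, untested sketch (helper names invented), not a proposed
patch:

  /* Per-CPU nesting guard in the spirit of [1]: a reserve that
   * reenters on the same CPU (i.e. from a tracepoint that fired
   * while rb->spinlock was held) bails out instead of spinning. */
  static DEFINE_PER_CPU(int, bpf_ringbuf_nest);

  static bool bpf_ringbuf_enter(void)
  {
          preempt_disable();
          if (__this_cpu_inc_return(bpf_ringbuf_nest) > 1) {
                  __this_cpu_dec(bpf_ringbuf_nest);
                  preempt_enable();
                  return false;
          }
          return true;
  }

  static void bpf_ringbuf_exit(void)
  {
          __this_cpu_dec(bpf_ringbuf_nest);
          preempt_enable();
  }

  /* __bpf_ringbuf_reserve() would then start with
   *
   *         if (!bpf_ringbuf_enter())
   *                 return NULL;
   *
   * and pair it with bpf_ringbuf_exit() on every return path. */

The cost would be that a nested reservation fails with NULL, which
callers must already be prepared to handle anyway.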
> > will try to reproduce
> >
> > jirka
> >
> > >  bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > >  __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > >  trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
> > >  __pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
> > >  pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
> > >  queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
> > >  queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> > >  do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
> > >  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> > >  _raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
> > >  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > >  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > >  bpf_prog_9efe54833449f08e+0x2d/0x47
> > >  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > >  __bpf_prog_run include/linux/filter.h:651 [inline]
> > >  bpf_prog_run include/linux/filter.h:658 [inline]
> > >  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> > >  bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > >  __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > >  trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
> > >  __mutex_lock_common kernel/locking/mutex.c:617 [inline]
> > >  __mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
> > >  __pipe_lock fs/pipe.c:103 [inline]
> > >  pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > >  call_write_iter include/linux/fs.h:2087 [inline]
> > >  new_sync_write fs/read_write.c:497 [inline]
> > >  vfs_write+0xa81/0xcb0 fs/read_write.c:590
> > >  ksys_write+0x1a0/0x2c0 fs/read_write.c:643
> > >  do_syscall_64+0xf9/0x240
> > >  entry_SYSCALL_64_after_hwframe+0x6f/0x77
> > > RIP: 0033:0x4e8593
> > > Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
> > > RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> > > RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
> > > RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
> > > RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
> > > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
> > > R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
> > >  </TASK>
> > >
> > >
> > > ---
> > > This report is generated by a bot. It may contain errors.
> > > See https://goo.gl/tpsmEJ for more information about syzbot.
> > > syzbot engineers can be reached at syzkaller@...glegroups.com.
> > >
> > > syzbot will keep track of this issue. See:
> > > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> > >
> > > If the report is already addressed, let syzbot know by replying with:
> > > #syz fix: exact-commit-title
> > >
> > > If you want syzbot to run the reproducer, reply with:
> > > #syz test: git://repo/address.git branch-or-commit-hash
> > > If you attach or paste a git patch, syzbot will apply it before testing.
> > >
> > > If you want to overwrite report's subsystems, reply with:
> > > #syz set subsystems: new-subsystem
> > > (See the list of subsystem names on the web dashboard)
> > >
> > > If the report is a duplicate of another one, reply with:
> > > #syz dup: exact-subject-of-another-report
> > >
> > > If you want to undo deduplication, reply with:
> > > #syz undup
