Message-ID: <20251009224447.8479-1-hdanton@sina.com>
Date: Fri, 10 Oct 2025 06:44:46 +0800
From: Hillf Danton <hdanton@...a.com>
To: syzbot <syzbot+665739f456b28f32b23d@...kaller.appspotmail.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
linux-kernel@...r.kernel.org,
netdev@...r.kernel.org,
syzkaller-bugs@...glegroups.com,
virtualization@...ts.linux.dev
Subject: Re: [syzbot] [net?] [virt?] BUG: sleeping function called from invalid context in __set_page_owner
> Date: Thu, 09 Oct 2025 09:46:27 -0700
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: ec714e371f22 Merge tag 'perf-tools-for-v6.18-1-2025-10-08'..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=174a4b34580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=db9c80a8900dca57
> dashboard link: https://syzkaller.appspot.com/bug?extid=665739f456b28f32b23d
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=140e0dcd980000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1581452f980000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/6d5cce2bcf5d/disk-ec714e37.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/60dff1e3a58f/vmlinux-ec714e37.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/6a1823720b55/bzImage-ec714e37.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+665739f456b28f32b23d@...kaller.appspotmail.com
>
> BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6069, name: syz.0.17
> preempt_count: 1, expected: 0
> RCU nest depth: 2, expected: 2
> 5 locks held by syz.0.17/6069:
> #0: ffff888035808350 (sk_lock-AF_VSOCK){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1679 [inline]
> #0: ffff888035808350 (sk_lock-AF_VSOCK){+.+.}-{0:0}, at: vsock_connect+0x152/0xe20 net/vmw_vsock/af_vsock.c:1546
> #1: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #1: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
> #1: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2074 [inline]
> #1: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run9+0x1ec/0x500 kernel/trace/bpf_trace.c:2123
> #2: ffff8880b8832c88 ((stream_local_lock)){+.+.}-{3:3}, at: bpf_stream_page_local_lock kernel/bpf/stream.c:46 [inline]
> #2: ffff8880b8832c88 ((stream_local_lock)){+.+.}-{3:3}, at: bpf_stream_elem_alloc kernel/bpf/stream.c:175 [inline]
> #2: ffff8880b8832c88 ((stream_local_lock)){+.+.}-{3:3}, at: __bpf_stream_push_str+0x211/0xbe0 kernel/bpf/stream.c:190
> #3: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #3: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
> #3: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: __rt_spin_trylock kernel/locking/spinlock_rt.c:110 [inline]
> #3: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rt_spin_trylock+0x10d/0x2b0 kernel/locking/spinlock_rt.c:118
> #4: ffff8880b883f6e8 (&s->lock_key#5){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
> #4: ffff8880b883f6e8 (&s->lock_key#5){+.+.}-{3:3}, at: ___slab_alloc+0x12f/0x1470 mm/slub.c:4492
> Preemption disabled at:
> [<0000000000000000>] 0x0
> CPU: 0 UID: 0 PID: 6069 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> __might_resched+0x44b/0x5d0 kernel/sched/core.c:8925
> __rt_spin_lock kernel/locking/spinlock_rt.c:48 [inline]
> rt_spin_lock+0xc7/0x3e0 kernel/locking/spinlock_rt.c:57
> spin_lock include/linux/spinlock_rt.h:44 [inline]
Given the atomic context enforced by bpf [1], this is another case where bpf
makes trouble.
[1] cant_sleep() in __bpf_trace_run()
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c#n2065
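
To spell the collision out, a condensed sketch (not a patch; the *_sketch
names are mine, the line references come from the trace above):

/* 1) the tracepoint side pins the prog in non-sleeping context */
static void __bpf_trace_run_sketch(struct bpf_prog *prog, u64 *args)
{
	cant_sleep();			/* bpf_trace.c:2065 [1] */
	rcu_read_lock();
	bpf_prog_run(prog, args);	/* may_goto timeout report
					 * fires in here */
	rcu_read_unlock();
}

/*
 * 2) the report path allocates: alloc_pages_nolock() itself is meant
 * for atomic context, but page_owner then kmalloc()s a stack record,
 * and on PREEMPT_RT the slab lock is an rtmutex-backed spinlock_t
 */
static void *___slab_alloc_sketch(struct kmem_cache_sketch *s)
{
	spin_lock(&s->lock);	/* rt_spin_lock() -> __might_resched(),
				 * hence the splat under 1) */
	/* ... */
	spin_unlock(&s->lock);
	return NULL;
}
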
> ___slab_alloc+0x12f/0x1470 mm/slub.c:4492
> __slab_alloc+0xc6/0x1f0 mm/slub.c:4746
> __slab_alloc_node mm/slub.c:4822 [inline]
> slab_alloc_node mm/slub.c:5233 [inline]
> __kmalloc_cache_noprof+0xec/0x6c0 mm/slub.c:5719
> kmalloc_noprof include/linux/slab.h:957 [inline]
> add_stack_record_to_list mm/page_owner.c:172 [inline]
> inc_stack_record_count mm/page_owner.c:214 [inline]
> __set_page_owner+0x25c/0x490 mm/page_owner.c:333
> set_page_owner include/linux/page_owner.h:32 [inline]
> post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
> prep_new_page mm/page_alloc.c:1858 [inline]
> get_page_from_freelist+0x28c0/0x2960 mm/page_alloc.c:3884
> alloc_frozen_pages_nolock_noprof+0xbc/0x150 mm/page_alloc.c:7595
> alloc_pages_nolock_noprof+0xa/0x30 mm/page_alloc.c:7628
> bpf_stream_page_replace+0x19/0x1e0 kernel/bpf/stream.c:86
> bpf_stream_page_reserve_elem kernel/bpf/stream.c:142 [inline]
> bpf_stream_elem_alloc kernel/bpf/stream.c:177 [inline]
> __bpf_stream_push_str+0x35c/0xbe0 kernel/bpf/stream.c:190
> bpf_stream_stage_printk+0x14e/0x1c0 kernel/bpf/stream.c:448
> bpf_prog_report_may_goto_violation+0xc4/0x190 kernel/bpf/core.c:3181
> bpf_check_timed_may_goto+0xaa/0xb0 kernel/bpf/core.c:3199
> arch_bpf_timed_may_goto+0x21/0x40 arch/x86/net/bpf_timed_may_goto.S:40
> bpf_prog_6fd842a53d323cc5+0x53/0x5f
> bpf_dispatcher_nop_func include/linux/bpf.h:1350 [inline]
> __bpf_prog_run include/linux/filter.h:721 [inline]
> bpf_prog_run include/linux/filter.h:728 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2075 [inline]
> bpf_trace_run9+0x2db/0x500 kernel/trace/bpf_trace.c:2123
> __bpf_trace_virtio_transport_alloc_pkt+0x2d7/0x340 include/trace/events/vsock_virtio_transport_common.h:39
> __do_trace_virtio_transport_alloc_pkt include/trace/events/vsock_virtio_transport_common.h:39 [inline]
> trace_virtio_transport_alloc_pkt include/trace/events/vsock_virtio_transport_common.h:39 [inline]
> virtio_transport_alloc_skb+0x10cc/0x1130 net/vmw_vsock/virtio_transport_common.c:311
> virtio_transport_send_pkt_info+0x6be/0x1100 net/vmw_vsock/virtio_transport_common.c:390
> virtio_transport_connect+0xa7/0x100 net/vmw_vsock/virtio_transport_common.c:1072
> vsock_connect+0xb8b/0xe20 net/vmw_vsock/af_vsock.c:1611
> __sys_connect_file net/socket.c:2102 [inline]
> __sys_connect+0x323/0x450 net/socket.c:2121
> __do_sys_connect net/socket.c:2127 [inline]
> __se_sys_connect net/socket.c:2124 [inline]
> __x64_sys_connect+0x7a/0x90 net/socket.c:2124
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f