Message-ID: <0000000000007aa62a0619e8c330@google.com>
Date: Sun, 02 Jun 2024 07:09:22 -0700
From: syzbot <syzbot+08b113332e19a9378dd5@...kaller.appspotmail.com>
To: brauner@...nel.org, gregkh@...uxfoundation.org, jack@...e.cz,
kent.overstreet@...ux.dev, linkinjeon@...nel.org,
linux-bcachefs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, sj1557.seo@...sung.com,
syzkaller-bugs@...glegroups.com, tj@...nel.org, viro@...iv.linux.org.uk
Subject: [syzbot] [kernfs?] [bcachefs?] [exfat?] INFO: task hung in
 do_unlinkat (5)

Hello,

syzbot found the following issue on:
HEAD commit: b6394d6f7159 Merge tag 'pull-misc' of git://git.kernel.org..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=146989a4980000
kernel config: https://syzkaller.appspot.com/x/.config?x=713476114e57eef3
dashboard link: https://syzkaller.appspot.com/bug?extid=08b113332e19a9378dd5
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e8e1377d4772/disk-b6394d6f.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/19fbbb3b6dd5/vmlinux-b6394d6f.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4dcce16af95d/bzImage-b6394d6f.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+08b113332e19a9378dd5@...kaller.appspotmail.com

INFO: task syz-executor.2:9894 blocked for more than 143 seconds.
Not tainted 6.9.0-syzkaller-10729-gb6394d6f7159 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2 state:D stack:24688 pid:9894 tgid:9893 ppid:9055 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5408 [inline]
__schedule+0x1796/0x4a00 kernel/sched/core.c:6745
__schedule_loop kernel/sched/core.c:6822 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6837
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
rwsem_down_write_slowpath+0xeeb/0x13b0 kernel/locking/rwsem.c:1178
__down_write_common+0x1af/0x200 kernel/locking/rwsem.c:1306
inode_lock_nested include/linux/fs.h:826 [inline]
do_unlinkat+0x26a/0x830 fs/namei.c:4394
__do_sys_unlink fs/namei.c:4455 [inline]
__se_sys_unlink fs/namei.c:4453 [inline]
__x64_sys_unlink+0x49/0x60 fs/namei.c:4453
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7faae4e7cee9
RSP: 002b:00007faae5c910c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007faae4fabf80 RCX: 00007faae4e7cee9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000a80
RBP: 00007faae4ec949e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007faae4fabf80 R15: 00007ffee8d5b6f8
</TASK>
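
The trace above shows the task stuck taking the parent directory's i_rwsem
for write (inode_lock_nested() called from do_unlinkat()); ORIG_RAX 0x57 is
__NR_unlink on x86_64. There is no reproducer yet, so purely as an
illustrative userspace sketch (the path and program below are hypothetical,
not syzbot's input), the blocked call has the shape of:

/* Hypothetical sketch only: the syscall shape syz-executor.2 is blocked
 * in. On the affected kernel this unlink() can block indefinitely while
 * do_unlinkat() waits for the parent directory's i_rwsem. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* "./file0" is a placeholder; RDI in the register dump holds the
	 * fuzzer's pathname pointer, which we do not know. */
	if (unlink("./file0") != 0)
		perror("unlink");
	return 0;
}
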
Showing all locks held in the system:
3 locks held by kworker/u8:1/12:
1 lock held by khungtaskd/30:
#0: ffffffff8e333d20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#0: ffffffff8e333d20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#0: ffffffff8e333d20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by kworker/u8:7/2801:
2 locks held by getty/4839:
#0: ffff88802f9390a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90002f162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
2 locks held by syz-fuzzer/5091:
3 locks held by kworker/1:6/5155:
#0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90003a8fd00 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90003a8fd00 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8e3390f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#2: ffffffff8e3390f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:939
2 locks held by syz-executor.2/9894:
#0: ffff888063566420 (sb_writers#35){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
#1: ffff88805ef856a8 (&type->i_mutex_dir_key#29/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
#1: ffff88805ef856a8 (&type->i_mutex_dir_key#29/1){+.+.}-{3:3}, at: do_unlinkat+0x26a/0x830 fs/namei.c:4394
2 locks held by syz-executor.2/9897:
3 locks held by syz-executor.4/11013:
2 locks held by syz-executor.4/11026:
#0: ffff888021986420 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:3900
1 lock held by syz-executor.4/11030:
#0: ffff88805505afe0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:801 [inline]
#0: ffff88805505afe0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: lookup_slow+0x45/0x70 fs/namei.c:1708
2 locks held by syz-executor.4/11037:
#0: ffff888021986420 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:3900
2 locks held by syz-executor.4/11038:
#0: ffff888021986420 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:801 [inline]
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: open_last_lookups fs/namei.c:3573 [inline]
#1: ffff88805505afe0 (&type->i_mutex_dir_key#20){++++}-{3:3}, at: path_openat+0x7c4/0x3280 fs/namei.c:3804
4 locks held by kworker/0:2/11663:
#0: ffff8880b943e798 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:559
#1: ffff8880b9428948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x441/0x770 kernel/sched/psi.c:988
#2: ffff8880b942a718 (&base->lock){-.-.}-{2:2}, at: __debug_check_no_obj_freed lib/debugobjects.c:978 [inline]
#2: ffff8880b942a718 (&base->lock){-.-.}-{2:2}, at: debug_check_no_obj_freed+0x234/0x580 lib/debugobjects.c:1019
#3: ffffffff94a429d8 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x16d/0x510 lib/debugobjects.c:708
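
Reading the dump: several syz-executor.4 tasks are serialized on the same
directory rwsem (ffff88805505afe0), taking it exclusively in
filename_create() and shared in lookup_slow()/path_openat(), while the hung
unlink in syz-executor.2 waits on a different directory lock
(&type->i_mutex_dir_key#29); the conflicting holder of that one is not
among the tasks whose locks are itemized above. As a minimal userspace
analogy of the blocking pattern (a pthread rwlock standing in for the
inode's i_rwsem; all names are illustrative, this is not kernel code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* dir_lock plays the role of a directory inode's i_rwsem. */
static pthread_rwlock_t dir_lock = PTHREAD_RWLOCK_INITIALIZER;

static void *holder(void *arg)
{
	(void)arg;
	/* Like a lookup or create that never completes: take the lock
	 * and never release it. */
	pthread_rwlock_rdlock(&dir_lock);
	for (;;)
		pause();
	return NULL;
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, holder, NULL);
	sleep(1);
	fprintf(stderr, "taking dir_lock for write (like unlink)...\n");
	/* Blocks forever: the userspace analogue of syz-executor.2
	 * sitting in rwsem_down_write_slowpath(). */
	pthread_rwlock_wrlock(&dir_lock);
	return 0;
}

(Build with cc -pthread; the program intentionally never exits.)
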
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@...glegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard.)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup