Open Source and information security mailing list archives
 
Message-ID: <69339dd6.050a0220.3a66f.000a.GAE@google.com>
Date: Fri, 05 Dec 2025 19:07:02 -0800
From: syzbot <syzbot+4235e4d7b6fd75704528@...kaller.appspotmail.com>
To: kartikey406@...il.com, linux-kernel@...r.kernel.org, 
	syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [f2fs?] INFO: task hung in f2fs_release_file (3)

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in f2fs_release_file

INFO: task syz.0.17:6702 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:24920 pid:6702  tgid:6702  ppid:6608   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
 __fput+0x45b/0xa80 fs/file_table.c:468
 task_work_run+0x1d4/0x260 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0xff/0x4f0 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0b2ee4f749
RSP: 002b:00007ffcf2baa8c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007f0b2f0a7da0 RCX: 00007f0b2ee4f749
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007f0b2f0a7da0 R08: 0000000000000000 R09: 00000006f2baabbf
R10: 00007f0b2f0a7cb0 R11: 0000000000000246 R12: 00000000000278cd
R13: 00007ffcf2baa9c0 R14: ffffffffffffffff R15: 00007ffcf2baa9e0
 </TASK>
INFO: task syz.0.17:6703 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:24488 pid:6703  tgid:6702  ppid:6608   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6960
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
 f2fs_issue_checkpoint+0x382/0x610 fs/f2fs/checkpoint.c:-1
 f2fs_unlink+0x5cb/0xa80 fs/f2fs/namei.c:603
 vfs_unlink+0x386/0x650 fs/namei.c:5369
 do_unlinkat+0x2cf/0x570 fs/namei.c:5439
 __do_sys_unlinkat fs/namei.c:5469 [inline]
 __se_sys_unlinkat fs/namei.c:5462 [inline]
 __x64_sys_unlinkat+0xd3/0xf0 fs/namei.c:5462
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0b2ee4f749
RSP: 002b:00007f0b2e4be038 EFLAGS: 00000246 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 00007f0b2f0a5fa0 RCX: 00007f0b2ee4f749
RDX: 0000000000000000 RSI: 0000200000000040 RDI: ffffffffffffff9c
RBP: 00007f0b2eed3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0b2f0a6038 R14: 00007f0b2f0a5fa0 R15: 00007ffcf2baa768
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/13:
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88801f3ea0d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
1 lock held by khungtaskd/38:
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:3/58:
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000124fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000124fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88803ac280d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:4/76:
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000155fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000155fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888028a020d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:11/1428:
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000572fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000572fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff8880395180d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:16/4387:
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140463938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000eb47b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000eb47b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888033fea0d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
2 locks held by getty/5564:
 #0: ffff88814ead00a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1400 drivers/tty/n_tty.c:2222
1 lock held by syz.0.17/6702:
 #0: ffff88805c1e9df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c1e9df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.0.17/6703:
 #0: ffff888039518480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c1e9478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c1e9478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c1e9478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c1e9478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c1e9df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c1e9df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:0/6725:
1 lock held by syz.1.19/6788:
 #0: ffff88805c1ecd78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c1ecd78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.1.19/6789:
 #0: ffff888033fea480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c1ec3f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c1ec3f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c1ec3f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c1ec3f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c1ecd78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c1ecd78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:1/6794:
1 lock held by syz.2.20/6816:
 #0: ffff88805c1ef378 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c1ef378 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.2.20/6817:
 #0: ffff88801f3ea480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c1ee9f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c1ee9f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c1ee9f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c1ee9f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c1ef378 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c1ef378 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:2/6821:
1 lock held by syz.3.21/6842:
 #0: ffff88805c271df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c271df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.3.21/6843:
 #0: ffff88803ac28480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c271478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c271478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c271478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c271478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c271df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c271df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
2 locks held by f2fs_ckpt-7:3/6847:
1 lock held by syz.4.22/6880:
 #0: ffff88805c079df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c079df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.4.22/6881:
 #0: ffff888028a02480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c079478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c079478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c079478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c079478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c079df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c079df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
4 locks held by f2fs_ckpt-7:4/6888:
1 lock held by syz.5.23/6922:
 #0: ffff88805c07a778 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c07a778 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.5.23/6923:
 #0: ffff88801f348480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c273a78 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c273a78 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c273a78 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c273a78 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c07a778 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c07a778 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
5 locks held by f2fs_ckpt-7:5/6928:
1 lock held by syz.6.24/6962:
 #0: ffff88805c276078 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88805c276078 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.6.24/6963:
 #0: ffff888032e96480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c2756f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805c2756f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805c2756f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805c2756f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff88805c276078 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805c276078 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:6/6967:
3 locks held by syz.7.25/7007:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3c/0xf80 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 6967 Comm: f2fs_ckpt-7:6 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:mark_lock+0xf/0x190 kernel/locking/lockdep.c:4722
Code: 24 e9 b6 fd ff ff 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 41 57 41 56 41 55 41 54 53 8b 46 20 89 c1 <81> e1 00 00 03 00 83 f9 01 bb 09 00 00 00 83 db 00 83 fa 08 0f 45
RSP: 0018:ffffc90004ecf5c8 EFLAGS: 00000006
RAX: 0000000000040c13 RBX: ffff88802557bc80 RCX: 0000000000040c13
RDX: 0000000000000006 RSI: ffff88802557c838 RDI: ffff88802557bc80
RBP: ffffc90004ecf6d0 R08: ffffffff8eda5877 R09: 1ffffffff1db4b0e
R10: dffffc0000000000 R11: fffffbfff1db4b0f R12: ffff88802557c888
R13: 0000000000000a02 R14: ffff88802557c838 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff888126e52000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f190c4a5000 CR3: 000000003af82000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 mark_held_locks kernel/locking/lockdep.c:4325 [inline]
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4351 [inline]
 lockdep_hardirqs_on_prepare+0x191/0x290 kernel/locking/lockdep.c:4410
 trace_hardirqs_on+0x28/0x40 kernel/trace/trace_preemptirq.c:78
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
 _raw_spin_unlock_irqrestore+0x85/0x110 kernel/locking/spinlock.c:194
 raw_spin_unlock_irqrestore_wake include/linux/sched/wake_q.h:94 [inline]
 rtlock_slowlock kernel/locking/rtmutex.c:1896 [inline]
 rtlock_lock kernel/locking/spinlock_rt.c:43 [inline]
 __rt_spin_lock kernel/locking/spinlock_rt.c:49 [inline]
 rt_spin_lock+0x16d/0x3e0 kernel/locking/spinlock_rt.c:57
 spin_lock include/linux/spinlock_rt.h:44 [inline]
 f2fs_sync_inode_meta fs/f2fs/checkpoint.c:1142 [inline]
 block_operations fs/f2fs/checkpoint.c:1265 [inline]
 f2fs_write_checkpoint+0xa78/0x2450 fs/f2fs/checkpoint.c:1680
 __write_checkpoint_sync fs/f2fs/checkpoint.c:1804 [inline]
 __checkpoint_and_complete_reqs+0xdf/0x3d0 fs/f2fs/checkpoint.c:1823
 issue_checkpoint_thread+0xd9/0x260 fs/f2fs/checkpoint.c:1855
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006


Tested on:

commit:         3af870ae nfs/localio: fix regression due to out-of-ord..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=155c821a580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=83a7cfc766b11a4f
dashboard link: https://syzkaller.appspot.com/bug?extid=4235e4d7b6fd75704528
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
patch:          https://syzkaller.appspot.com/x/patch.diff?x=176a3c1a580000

