Message-ID: <6933c628.050a0220.3a66f.000c.GAE@google.com>
Date: Fri, 05 Dec 2025 21:59:04 -0800
From: syzbot <syzbot+4235e4d7b6fd75704528@...kaller.appspotmail.com>
To: kartikey406@...il.com, linux-kernel@...r.kernel.org, 
	syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [f2fs?] INFO: task hung in f2fs_release_file (3)

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in f2fs_release_file

INFO: task syz.0.17:6686 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:26840 pid:6686  tgid:6686  ppid:6590   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
 __fput+0x45b/0xa80 fs/file_table.c:468
 task_work_run+0x1d4/0x260 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0xff/0x4f0 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x2e3/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7febda5ff749
RSP: 002b:00007ffd51e7dc88 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007febda857da0 RCX: 00007febda5ff749
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007febda857da0 R08: 0000000000000000 R09: 0000000651e7df7f
R10: 00007febda857cb0 R11: 0000000000000246 R12: 0000000000028cbd
R13: 00007ffd51e7dd80 R14: ffffffffffffffff R15: 00007ffd51e7dda0
 </TASK>
INFO: task syz.0.17:6687 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:24480 pid:6687  tgid:6686  ppid:6590   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6960
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
 f2fs_issue_checkpoint+0x382/0x610 fs/f2fs/checkpoint.c:-1
 f2fs_unlink+0x5cb/0xa80 fs/f2fs/namei.c:603
 vfs_unlink+0x386/0x650 fs/namei.c:5369
 do_unlinkat+0x2cf/0x570 fs/namei.c:5439
 __do_sys_unlinkat fs/namei.c:5469 [inline]
 __se_sys_unlinkat fs/namei.c:5462 [inline]
 __x64_sys_unlinkat+0xd3/0xf0 fs/namei.c:5462
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7febda5ff749
RSP: 002b:00007febd9c66038 EFLAGS: 00000246 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 00007febda855fa0 RCX: 00007febda5ff749
RDX: 0000000000000000 RSI: 0000200000000040 RDI: ffffffffffffff9c
RBP: 00007febda683f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007febda856038 R14: 00007febda855fa0 R15: 00007ffd51e7db28
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:0/12:
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000117b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000117b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888028f680d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:1/13:
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88803ac680d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
1 lock held by khungtaskd/38:
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5aecc0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:5/85:
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000155fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000155fb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88802a0e80d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:15/3524:
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000e7dfb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000e7dfb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888029cae0d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
3 locks held by kworker/u8:17/3751:
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888140474138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000eddfb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000eddfb80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888035a5a0d0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
2 locks held by getty/5557:
 #0: ffff8880351a20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1400 drivers/tty/n_tty.c:2222
3 locks held by kworker/1:4/6677:
1 lock held by syz.0.17/6686:
 #0: ffff888057251df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff888057251df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.0.17/6687:
 #0: ffff88803ac68480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888057251478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888057251478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888057251478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888057251478 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff888057251df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff888057251df8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:0/6709:
1 lock held by syz.1.18/6765:
 #0: ffff8880573f1478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880573f1478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.1.18/6770:
 #0: ffff88802a0e8480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff8880573f0af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff8880573f0af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff8880573f0af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff8880573f0af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880573f1478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880573f1478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
6 locks held by f2fs_ckpt-7:1/6771:
1 lock held by syz.2.19/6795:
 #0: ffff8880573f3a78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880573f3a78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.2.19/6796:
 #0: ffff888028f68480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff8880573f30f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff8880573f30f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff8880573f30f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff8880573f30f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880573f3a78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880573f3a78 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:2/6800:
1 lock held by syz.3.20/6824:
 #0: ffff8880573f43f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880573f43f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.3.20/6825:
 #0: ffff888035a5a480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff8880572543f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff8880572543f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff8880572543f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff8880572543f8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880573f43f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880573f43f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:3/6829:
1 lock held by syz.4.21/6865:
 #0: ffff8880572569f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880572569f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.4.21/6867:
 #0: ffff888029cae480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888057256078 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888057256078 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888057256078 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888057256078 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880572569f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880572569f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:4/6872:
1 lock held by syz.5.22/6903:
 #0: ffff8880494a9478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880494a9478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.5.22/6904:
 #0: ffff888058642480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff8880494a8af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff8880494a8af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff8880494a8af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff8880494a8af8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880494a9478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880494a9478 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:5/6908:
1 lock held by syz.6.23/6944:
 #0: ffff8880494ab0f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff8880494ab0f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: f2fs_release_file+0xe3/0x150 fs/f2fs/file.c:2063
3 locks held by syz.6.23/6949:
 #0: ffff888028fec480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff8880494aa778 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff8880494aa778 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff8880494aa778 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff8880494aa778 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 #2: ffff8880494ab0f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff8880494ab0f8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: vfs_unlink+0xef/0x650 fs/namei.c:5355
3 locks held by f2fs_ckpt-7:6/6950:
3 locks held by syz.7.24/6990:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3c/0xf80 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 6771 Comm: f2fs_ckpt-7:1 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:raw_atomic_fetch_add_unless include/linux/atomic/atomic-arch-fallback.h:2429 [inline]
RIP: 0010:raw_atomic_add_unless include/linux/atomic/atomic-arch-fallback.h:2456 [inline]
RIP: 0010:atomic_add_unless include/linux/atomic/atomic-instrumented.h:1518 [inline]
RIP: 0010:page_ref_add_unless include/linux/page_ref.h:238 [inline]
RIP: 0010:folio_ref_add_unless include/linux/page_ref.h:248 [inline]
RIP: 0010:folio_try_get+0xf2/0x340 include/linux/page_ref.h:264
Code: c7 ff 49 83 c6 34 4c 89 f7 be 04 00 00 00 e8 d5 96 29 00 4c 89 f0 48 c1 e8 03 42 0f b6 04 20 84 c0 0f 85 03 01 00 00 45 8b 3e <31> ff 44 89 fe e8 d4 39 c7 ff 45 85 ff 0f 84 e1 00 00 00 41 8d 4f
RSP: 0018:ffffc90004257450 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffffffff81f8fc0c RCX: ffffffff81f8fccb
RDX: 0000000000000001 RSI: 0000000000000004 RDI: ffffea0000c35774
RBP: 0000000000000001 R08: ffffea0000c35777 R09: 1ffffd4000186aee
R10: dffffc0000000000 R11: fffff94000186aef R12: dffffc0000000000
R13: dffffc0000000000 R14: ffffea0000c35774 R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff888126d52000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1e3d701000 CR3: 0000000024ada000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 filemap_get_entry+0x1b8/0x2f0 mm/filemap.c:1905
 __filemap_get_folio_mpol+0x3c/0xa50 mm/filemap.c:1941
 __filemap_get_folio include/linux/pagemap.h:763 [inline]
 f2fs_grab_cache_folio+0x2e/0x380 fs/f2fs/f2fs.h:2935
 __get_node_folio+0x18e/0x14d0 fs/f2fs/node.c:1551
 f2fs_update_inode_page+0x82/0x190 fs/f2fs/inode.c:766
 f2fs_sync_inode_meta fs/f2fs/checkpoint.c:1160 [inline]
 block_operations fs/f2fs/checkpoint.c:1269 [inline]
 f2fs_write_checkpoint+0xc6f/0x2710 fs/f2fs/checkpoint.c:1684
 __write_checkpoint_sync fs/f2fs/checkpoint.c:1808 [inline]
 __checkpoint_and_complete_reqs+0xdf/0x3d0 fs/f2fs/checkpoint.c:1827
 issue_checkpoint_thread+0xd9/0x260 fs/f2fs/checkpoint.c:1859
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006
F2FS-fs (loop7): inject read IO error in f2fs_read_end_io of blk_update_request+0x57e/0xe60 block/blk-mq.c:1006


Tested on:

commit:         416f99c3 Merge tag 'driver-core-6.19-rc1' of git://git..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12f53c1a580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=8c04d4527fc98ffa
dashboard link: https://syzkaller.appspot.com/bug?extid=4235e4d7b6fd75704528
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
patch:          https://syzkaller.appspot.com/x/patch.diff?x=15d802c2580000

