Message-ID: <0000000000009731e206189f9e5b@google.com>
Date: Thu, 16 May 2024 22:28:19 -0700
From: syzbot <syzbot+016b09736213e65d106e@...kaller.appspotmail.com>
To: almaz.alexandrovich@...agon-software.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, ntfs3@...ts.linux.dev,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [ntfs3?] possible deadlock in ntfs_mark_rec_free (2)
syzbot has found a reproducer for the following issue on:
HEAD commit: fda5695d692c Merge branch 'for-next/core' into for-kernelci
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=15248fb8980000
kernel config: https://syzkaller.appspot.com/x/.config?x=95dc1de8407c7270
dashboard link: https://syzkaller.appspot.com/bug?extid=016b09736213e65d106e
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=13787684980000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10a93c92980000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/07f3214ff0d9/disk-fda5695d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/70e2e2c864e8/vmlinux-fda5695d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b259942a16dc/Image-fda5695d.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/0c9ec56039c3/mount_0.gz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+016b09736213e65d106e@...kaller.appspotmail.com
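For example, the tag goes with the other trailers at the end of the fix
commit's changelog (the description line is a placeholder, not a real
patch):

    <description of the locking fix>

    Reported-by: syzbot+016b09736213e65d106e@...kaller.appspotmail.com
    Signed-off-by: Your Name <you@example.com>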
======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc7-syzkaller-gfda5695d692c #0 Not tainted
------------------------------------------------------
kworker/u8:7/652 is trying to acquire lock:
ffff0000d80fa128 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x48/0x270 fs/ntfs3/fsntfs.c:742
but task is already holding lock:
ffff0000decb6fa0 (&ni->ni_lock#3){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1143 [inline]
ffff0000decb6fa0 (&ni->ni_lock#3){+.+.}-{3:3}, at: ni_write_inode+0x168/0xda4 fs/ntfs3/frecord.c:3265
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&ni->ni_lock#3){+.+.}-{3:3}:
__mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:608
__mutex_lock kernel/locking/mutex.c:752 [inline]
mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:804
ntfs_set_state+0x1a4/0x5c0 fs/ntfs3/fsntfs.c:947
mi_read+0x3e0/0x4d8 fs/ntfs3/record.c:185
mi_format_new+0x174/0x514 fs/ntfs3/record.c:420
ni_add_subrecord+0xd0/0x3c4 fs/ntfs3/frecord.c:372
ntfs_look_free_mft+0x4c8/0xd1c fs/ntfs3/fsntfs.c:715
ni_create_attr_list+0x764/0xf54 fs/ntfs3/frecord.c:876
ni_ins_attr_ext+0x300/0xa0c fs/ntfs3/frecord.c:974
ni_insert_attr fs/ntfs3/frecord.c:1141 [inline]
ni_insert_resident fs/ntfs3/frecord.c:1525 [inline]
ni_add_name+0x658/0xc14 fs/ntfs3/frecord.c:3047
ni_rename+0xc8/0x1d8 fs/ntfs3/frecord.c:3087
ntfs_rename+0x610/0xae0 fs/ntfs3/namei.c:334
vfs_rename+0x9bc/0xc84 fs/namei.c:4880
do_renameat2+0x9c8/0xe40 fs/namei.c:5037
__do_sys_renameat2 fs/namei.c:5071 [inline]
__se_sys_renameat2 fs/namei.c:5068 [inline]
__arm64_sys_renameat2+0xe0/0xfc fs/namei.c:5068
__invoke_syscall arch/arm64/kernel/syscall.c:34 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:48
el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:133
do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:152
el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:712
el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:730
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:598
-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x3384/0x763c kernel/locking/lockdep.c:5137
lock_acquire+0x248/0x73c kernel/locking/lockdep.c:5754
down_write_nested+0x58/0xcc kernel/locking/rwsem.c:1695
ntfs_mark_rec_free+0x48/0x270 fs/ntfs3/fsntfs.c:742
ni_write_inode+0xa28/0xda4 fs/ntfs3/frecord.c:3365
ntfs3_write_inode+0x70/0x98 fs/ntfs3/inode.c:1046
write_inode fs/fs-writeback.c:1498 [inline]
__writeback_single_inode+0x5f0/0x1548 fs/fs-writeback.c:1715
writeback_sb_inodes+0x700/0x101c fs/fs-writeback.c:1941
wb_writeback+0x404/0x1048 fs/fs-writeback.c:2117
wb_do_writeback fs/fs-writeback.c:2264 [inline]
wb_workfn+0x394/0x104c fs/fs-writeback.c:2304
process_one_work+0x7b8/0x15d4 kernel/workqueue.c:3267
process_scheduled_works kernel/workqueue.c:3348 [inline]
worker_thread+0x938/0xef4 kernel/workqueue.c:3429
kthread+0x288/0x310 kernel/kthread.c:388
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860
other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock#3);
                               lock(&wnd->rw_lock/1);
                               lock(&ni->ni_lock#3);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***
3 locks held by kworker/u8:7/652:
#0: ffff0000c20c6948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x668/0x15d4 kernel/workqueue.c:3241
#1: ffff800098d87c20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x6b4/0x15d4 kernel/workqueue.c:3241
#2: ffff0000decb6fa0 (&ni->ni_lock#3){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1143 [inline]
#2: ffff0000decb6fa0 (&ni->ni_lock#3){+.+.}-{3:3}, at: ni_write_inode+0x168/0xda4 fs/ntfs3/frecord.c:3265
stack backtrace:
CPU: 1 PID: 652 Comm: kworker/u8:7 Not tainted 6.9.0-rc7-syzkaller-gfda5695d692c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: writeback wb_workfn (flush-7:0)
Call trace:
dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:317
show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:324
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:114
dump_stack+0x1c/0x28 lib/dump_stack.c:123
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2060
check_noncircular+0x310/0x404 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x3384/0x763c kernel/locking/lockdep.c:5137
lock_acquire+0x248/0x73c kernel/locking/lockdep.c:5754
down_write_nested+0x58/0xcc kernel/locking/rwsem.c:1695
ntfs_mark_rec_free+0x48/0x270 fs/ntfs3/fsntfs.c:742
ni_write_inode+0xa28/0xda4 fs/ntfs3/frecord.c:3365
ntfs3_write_inode+0x70/0x98 fs/ntfs3/inode.c:1046
write_inode fs/fs-writeback.c:1498 [inline]
__writeback_single_inode+0x5f0/0x1548 fs/fs-writeback.c:1715
writeback_sb_inodes+0x700/0x101c fs/fs-writeback.c:1941
wb_writeback+0x404/0x1048 fs/fs-writeback.c:2117
wb_do_writeback fs/fs-writeback.c:2264 [inline]
wb_workfn+0x394/0x104c fs/fs-writeback.c:2304
process_one_work+0x7b8/0x15d4 kernel/workqueue.c:3267
process_scheduled_works kernel/workqueue.c:3348 [inline]
worker_thread+0x938/0xef4 kernel/workqueue.c:3429
kthread+0x288/0x310 kernel/kthread.c:388
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860
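In short, this is a classic AB-BA inversion: the rename path (chain #1
above) takes &ni->ni_lock while already holding &wnd->rw_lock
(ntfs_look_free_mft() -> ... -> ntfs_set_state()), while the writeback
path (chain #0) takes &wnd->rw_lock while already holding &ni->ni_lock
(ni_write_inode() -> ntfs_mark_rec_free()). A minimal userspace sketch
of the same inversion, with pthread mutexes standing in for the kernel
rwsem and mutex (illustrative only, not ntfs3 code; build with
"cc -pthread"):

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the two kernel locks named in the report. */
static pthread_mutex_t wnd_rw_lock = PTHREAD_MUTEX_INITIALIZER; /* &wnd->rw_lock */
static pthread_mutex_t ni_lock = PTHREAD_MUTEX_INITIALIZER;     /* &ni->ni_lock  */

/* Chain #1 (CPU1 in the table): rename holds rw_lock
 * (ntfs_look_free_mft), then takes ni_lock (ntfs_set_state). */
static void *rename_path(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&wnd_rw_lock);
        pthread_mutex_lock(&ni_lock);
        pthread_mutex_unlock(&ni_lock);
        pthread_mutex_unlock(&wnd_rw_lock);
    }
    return NULL;
}

/* Chain #0 (CPU0 in the table): writeback holds ni_lock
 * (ni_write_inode), then takes rw_lock (ntfs_mark_rec_free) --
 * the opposite order. */
static void *writeback_path(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&ni_lock);
        pthread_mutex_lock(&wnd_rw_lock);
        pthread_mutex_unlock(&wnd_rw_lock);
        pthread_mutex_unlock(&ni_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, rename_path, NULL);
    pthread_create(&b, NULL, writeback_path, NULL);
    /* With enough iterations the two threads usually wedge against
     * each other in exactly the order shown in the CPU0/CPU1 table. */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("no deadlock this run");
    return 0;
}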
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
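For example, to test a patch against the tree and HEAD commit this
report was generated on (taken from the header above):

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git fda5695d692c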