Message-ID: <0000000000001e3b8505e9f95e3d@google.com>
Date: Sat, 01 Oct 2022 06:48:38 -0700
From: syzbot <syzbot+8ef76b0b1f86c382ad37@...kaller.appspotmail.com>
To: anton@...era.com, linux-kernel@...r.kernel.org,
linux-ntfs-dev@...ts.sourceforge.net,
syzkaller-bugs@...glegroups.com
Subject: [syzbot] possible deadlock in ntfs_read_folio
Hello,
syzbot found the following issue on:
HEAD commit: 49c13ed0316d Merge tag 'soc-fixes-6.0-rc7' of git://git.ke..
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=104c0ce0880000
kernel config: https://syzkaller.appspot.com/x/.config?x=755695d26ad09807
dashboard link: https://syzkaller.appspot.com/bug?extid=8ef76b0b1f86c382ad37
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=165d4f6c880000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=113848f4880000
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+8ef76b0b1f86c382ad37@...kaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.0.0-rc7-syzkaller-00068-g49c13ed0316d #0 Not tainted
------------------------------------------------------
kworker/u4:5/1081 is trying to acquire lock:
ffff888075ab8940 (&rl->lock){++++}-{3:3}, at: ntfs_read_block fs/ntfs/aops.c:248 [inline]
ffff888075ab8940 (&rl->lock){++++}-{3:3}, at: ntfs_read_folio+0x1bd3/0x2e10 fs/ntfs/aops.c:436
but task is already holding lock:
ffff888075abb310 (&ni->mrec_lock){+.+.}-{3:3}, at: map_mft_record+0x3c/0x6b0 fs/ntfs/mft.c:154
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&ni->mrec_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x12f/0x1350 kernel/locking/mutex.c:747
map_mft_record+0x3c/0x6b0 fs/ntfs/mft.c:154
ntfs_map_runlist_nolock+0xb5a/0x16f0 fs/ntfs/attrib.c:91
ntfs_map_runlist+0x77/0xa0 fs/ntfs/attrib.c:292
ntfs_read_block fs/ntfs/aops.c:283 [inline]
ntfs_read_folio+0x1c2d/0x2e10 fs/ntfs/aops.c:436
read_pages+0xb5e/0xfc0 mm/readahead.c:178
page_cache_ra_unbounded+0x3f5/0x550 mm/readahead.c:263
do_page_cache_ra mm/readahead.c:293 [inline]
page_cache_ra_order+0x69a/0x970 mm/readahead.c:550
ondemand_readahead+0x6fc/0x1160 mm/readahead.c:672
page_cache_sync_ra+0x1c5/0x200 mm/readahead.c:699
page_cache_sync_readahead include/linux/pagemap.h:1215 [inline]
filemap_get_pages+0x2a1/0x1790 mm/filemap.c:2566
filemap_read+0x314/0xe10 mm/filemap.c:2660
generic_file_read_iter+0x3b0/0x5a0 mm/filemap.c:2806
__kernel_read+0x2c6/0x7c0 fs/read_write.c:428
integrity_kernel_read+0x7b/0xb0 security/integrity/iint.c:199
ima_calc_file_hash_tfm+0x2aa/0x3b0 security/integrity/ima/ima_crypto.c:485
ima_calc_file_shash security/integrity/ima/ima_crypto.c:516 [inline]
ima_calc_file_hash+0x191/0x4a0 security/integrity/ima/ima_crypto.c:573
ima_collect_measurement+0x5ca/0x710 security/integrity/ima/ima_api.c:292
process_measurement+0xd1e/0x18b0 security/integrity/ima/ima_main.c:337
ima_file_check+0xac/0x100 security/integrity/ima/ima_main.c:517
do_open fs/namei.c:3559 [inline]
path_openat+0x1611/0x28f0 fs/namei.c:3691
do_filp_open+0x1b6/0x400 fs/namei.c:3718
do_sys_openat2+0x16d/0x4c0 fs/open.c:1313
do_sys_open fs/open.c:1329 [inline]
__do_sys_openat fs/open.c:1345 [inline]
__se_sys_openat fs/open.c:1340 [inline]
__x64_sys_openat+0x13f/0x1f0 fs/open.c:1340
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (&rl->lock){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3095 [inline]
check_prevs_add kernel/locking/lockdep.c:3214 [inline]
validate_chain kernel/locking/lockdep.c:3829 [inline]
__lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5053
lock_acquire kernel/locking/lockdep.c:5666 [inline]
lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
down_read+0x98/0x450 kernel/locking/rwsem.c:1499
ntfs_read_block fs/ntfs/aops.c:248 [inline]
ntfs_read_folio+0x1bd3/0x2e10 fs/ntfs/aops.c:436
filemap_read_folio+0x3c/0x1d0 mm/filemap.c:2394
do_read_cache_folio+0x1df/0x510 mm/filemap.c:3519
do_read_cache_page mm/filemap.c:3561 [inline]
read_cache_page+0x59/0x2b0 mm/filemap.c:3570
read_mapping_page include/linux/pagemap.h:756 [inline]
ntfs_map_page fs/ntfs/aops.h:75 [inline]
ntfs_sync_mft_mirror+0x24b/0x1ea0 fs/ntfs/mft.c:480
write_mft_record_nolock+0x198a/0x1cc0 fs/ntfs/mft.c:787
write_mft_record+0x14e/0x3b0 fs/ntfs/mft.h:95
__ntfs_write_inode+0x911/0xc40 fs/ntfs/inode.c:3043
write_inode fs/fs-writeback.c:1440 [inline]
__writeback_single_inode+0xb5c/0x10b0 fs/fs-writeback.c:1652
writeback_sb_inodes+0x54d/0xf10 fs/fs-writeback.c:1865
wb_writeback+0x294/0xc20 fs/fs-writeback.c:2039
wb_do_writeback fs/fs-writeback.c:2182 [inline]
wb_workfn+0x2a1/0x1170 fs/fs-writeback.c:2222
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->mrec_lock);
                               lock(&rl->lock);
                               lock(&ni->mrec_lock);
  lock(&rl->lock);

 *** DEADLOCK ***
3 locks held by kworker/u4:5/1081:
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888144b0e138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc900045cfda8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffff888075abb310 (&ni->mrec_lock){+.+.}-{3:3}, at: map_mft_record+0x3c/0x6b0 fs/ntfs/mft.c:154
stack backtrace:
CPU: 0 PID: 1081 Comm: kworker/u4:5 Not tainted 6.0.0-rc7-syzkaller-00068-g49c13ed0316d #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Workqueue: writeback wb_workfn (flush-7:0)
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3095 [inline]
check_prevs_add kernel/locking/lockdep.c:3214 [inline]
validate_chain kernel/locking/lockdep.c:3829 [inline]
__lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5053
lock_acquire kernel/locking/lockdep.c:5666 [inline]
lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
down_read+0x98/0x450 kernel/locking/rwsem.c:1499
ntfs_read_block fs/ntfs/aops.c:248 [inline]
ntfs_read_folio+0x1bd3/0x2e10 fs/ntfs/aops.c:436
filemap_read_folio+0x3c/0x1d0 mm/filemap.c:2394
do_read_cache_folio+0x1df/0x510 mm/filemap.c:3519
do_read_cache_page mm/filemap.c:3561 [inline]
read_cache_page+0x59/0x2b0 mm/filemap.c:3570
read_mapping_page include/linux/pagemap.h:756 [inline]
ntfs_map_page fs/ntfs/aops.h:75 [inline]
ntfs_sync_mft_mirror+0x24b/0x1ea0 fs/ntfs/mft.c:480
write_mft_record_nolock+0x198a/0x1cc0 fs/ntfs/mft.c:787
write_mft_record+0x14e/0x3b0 fs/ntfs/mft.h:95
__ntfs_write_inode+0x911/0xc40 fs/ntfs/inode.c:3043
write_inode fs/fs-writeback.c:1440 [inline]
__writeback_single_inode+0xb5c/0x10b0 fs/fs-writeback.c:1652
writeback_sb_inodes+0x54d/0xf10 fs/fs-writeback.c:1865
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@...glegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches