Message-ID: <1845977.e0hk0VWMCB@cherry>
Date: Fri, 05 Apr 2024 21:40:14 +0300
From: Artem Sadovnikov <ancowi69@...il.com>
To: Mikhail Ukhin <mish.uxin2012@...dex.ru>, Theodore Ts'o <tytso@....edu>
Cc: stable@...r.kernel.org, Andreas Dilger <adilger.kernel@...ger.ca>,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
Michail Ivanov <iwanov-23@...ru>, Pavel Koshutin <koshutin.pavel@...dex.ru>,
lvc-project@...uxtesting.org
Subject: Re: [PATCH] ext4: fix i_data_sem unlock order in ext4_ind_migrate()
On Friday, April 5, 2024 5:26:51 AM MSK Theodore Ts'o wrote:
>
> This doesn't make any sense. Lock order matters; the order in which
> you unlock shouldn't (and doesn't) make a difference. This is also
> something which lockdep doesn't complain about --- because it's not a
> problem.
>
> So how was this "found by syzkaller"?
>
> - Ted
This issue only occurs when CONFIG_PROVE_LOCKING is set. In that case the
jbd2_might_wait_for_commit() annotation makes jbd2_journal_stop() acquire the
jbd2_handle lockdep map, and doing so while i_data_sem is still held triggers
lockdep, because jbd2_journal_start() can acquire the same jbd2_handle map
before i_data_sem on other code paths.
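For reference, the annotation itself is only a lockdep map acquire/release; a
rough sketch of it (paraphrased from include/linux/jbd2.h from memory, not a
verbatim copy) looks like this:

#ifdef CONFIG_PROVE_LOCKING
/* tell lockdep the caller may wait for a transaction commit */
#define jbd2_might_wait_for_commit(journal)				\
	do {								\
		rwsem_acquire(&journal->j_trans_commit_map, 0, 0, _THIS_IP_); \
		rwsem_release(&journal->j_trans_commit_map, _THIS_IP_);	\
	} while (0)
#else
#define jbd2_might_wait_for_commit(journal)
#endif

So it never blocks, it only tells lockdep that the caller may wait on the
jbd2_handle map (journal->j_trans_commit_map).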
Without CONFIG_PROVE_LOCKING this issue is not possible, because
jbd2_might_wait_for_commit() compiles away to nothing and jbd2_journal_stop()
never touches the jbd2_handle map.
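To illustrate what the patch changes: the exit path of ext4_ind_migrate()
currently stops the handle while i_data_sem is still held, and the patch just
releases i_data_sem first. A simplified sketch, not the exact hunk from
fs/ext4/migrate.c:

	/* current order: lockdep sees jbd2_handle acquired under i_data_sem */
	ext4_journal_stop(handle);
	up_write(&EXT4_I(inode)->i_data_sem);

	/* patched order: drop i_data_sem before stopping the handle */
	up_write(&EXT4_I(inode)->i_data_sem);
	ext4_journal_stop(handle);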
The syzkaller report was only available locally and was not included in the
initial patch message, so I am putting it in this message.
We were not able to create a reproducer for this problem, so there is only the
crash report:
======================================================
WARNING: possible circular locking dependency detected
5.10.208-syzkaller #0 Not tainted
------------------------------------------------------
syz-fuzzer/1080 is trying to acquire lock:
ffff88810b09e8e0 (jbd2_handle){++++}-{0:0}, at: jbd2_log_wait_commit+0x142/0x430 fs/jbd2/journal.c:693
but task is already holding lock:
ffff88805eec5c78 (&ei->i_data_sem){++++}-{3:3}, at: ext4_ind_migrate+0x2fe/0x840 fs/ext4/migrate.c:633
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&ei->i_data_sem){++++}-{3:3}:
down_read+0x96/0x420 kernel/locking/rwsem.c:1504
ext4_da_map_blocks fs/ext4/inode.c:1776 [inline]
ext4_da_get_block_prep+0x5b4/0x11a0 fs/ext4/inode.c:1857
ext4_block_write_begin+0x479/0x1230 fs/ext4/inode.c:1076
ext4_da_write_begin+0x3ca/0x1060 fs/ext4/inode.c:3063
generic_perform_write+0x210/0x500 mm/filemap.c:3336
ext4_buffered_write_iter+0x232/0x4a0 fs/ext4/file.c:269
ext4_file_write_iter+0x429/0x1420 fs/ext4/file.c:683
call_write_iter include/linux/fs.h:1962 [inline]
new_sync_write+0x427/0x660 fs/read_write.c:518
vfs_write+0x774/0xab0 fs/read_write.c:605
ksys_write+0x12d/0x260 fs/read_write.c:658
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x62/0xc7
-> #0 (jbd2_handle){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:2988 [inline]
check_prevs_add kernel/locking/lockdep.c:3113 [inline]
validate_chain kernel/locking/lockdep.c:3729 [inline]
__lock_acquire+0x29c4/0x53c0 kernel/locking/lockdep.c:4955
lock_acquire kernel/locking/lockdep.c:5566 [inline]
lock_acquire+0x19f/0x4a0 kernel/locking/lockdep.c:5531
jbd2_log_wait_commit+0x177/0x430 fs/jbd2/journal.c:696
jbd2_journal_stop+0x5b0/0xde0 fs/jbd2/transaction.c:1933
__ext4_journal_stop+0xde/0x1f0 fs/ext4/ext4_jbd2.c:127
ext4_ind_migrate+0x402/0x840 fs/ext4/migrate.c:666
ext4_ioctl_setflags+0xaef/0xc70 fs/ext4/ioctl.c:459
__ext4_ioctl+0x3742/0x4e20 fs/ext4/ioctl.c:870
vfs_ioctl fs/ioctl.c:48 [inline]
__do_sys_ioctl fs/ioctl.c:753 [inline]
__se_sys_ioctl fs/ioctl.c:739 [inline]
__x64_sys_ioctl+0x19a/0x210 fs/ioctl.c:739
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x62/0xc7
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&ei->i_data_sem);
                               lock(jbd2_handle);
                               lock(&ei->i_data_sem);
  lock(jbd2_handle);
*** DEADLOCK ***
4 locks held by syz-fuzzer/1080:
#0: ffff88810bc42460 (sb_writers#4){.+.+}-{0:0}, at: __ext4_ioctl+0x76a/0x4e20 fs/ext4/ioctl.c:861
#1: ffff88805eec5e88 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:774 [inline]
#1: ffff88805eec5e88 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: __ext4_ioctl+0x799/0x4e20 fs/ext4/ioctl.c:865
#2: ffff88810bc44ac0 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: ext4_ind_migrate+0x237/0x840 fs/ext4/migrate.c:625
#3: ffff88805eec5c78 (&ei->i_data_sem){++++}-{3:3}, at: ext4_ind_migrate+0x2fe/0x840 fs/ext4/migrate.c:633
stack backtrace:
CPU: 0 PID: 1080 Comm: syz-fuzzer Not tainted 5.10.208-syzkaller #0
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x107/0x167 lib/dump_stack.c:118
check_noncircular+0x26c/0x320 kernel/locking/lockdep.c:2123
check_prev_add kernel/locking/lockdep.c:2988 [inline]
check_prevs_add kernel/locking/lockdep.c:3113 [inline]
validate_chain kernel/locking/lockdep.c:3729 [inline]
__lock_acquire+0x29c4/0x53c0 kernel/locking/lockdep.c:4955
lock_acquire kernel/locking/lockdep.c:5566 [inline]
lock_acquire+0x19f/0x4a0 kernel/locking/lockdep.c:5531
jbd2_log_wait_commit+0x177/0x430 fs/jbd2/journal.c:696
jbd2_journal_stop+0x5b0/0xde0 fs/jbd2/transaction.c:1933
__ext4_journal_stop+0xde/0x1f0 fs/ext4/ext4_jbd2.c:127
ext4_ind_migrate+0x402/0x840 fs/ext4/migrate.c:666
ext4_ioctl_setflags+0xaef/0xc70 fs/ext4/ioctl.c:459
__ext4_ioctl+0x3742/0x4e20 fs/ext4/ioctl.c:870
vfs_ioctl fs/ioctl.c:48 [inline]
__do_sys_ioctl fs/ioctl.c:753 [inline]
__se_sys_ioctl fs/ioctl.c:739 [inline]
__x64_sys_ioctl+0x19a/0x210 fs/ioctl.c:739
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x62/0xc7
RIP: 0033:0x49e0bb
Code: e8 0a 4b fc ff eb 88 cc cc cc cc cc cc cc cc e8 1b 8f fc ff 48 8b 7c 24 10 48 8b 74 24 18 48 8b 54 24 20 48 8b 44 24 08 0f 05 <48> 3d 01 f0 ff ff 76 20 48 c7 44 24 28 ff ff ff ff 48 c7 44 24 30
RSP: 002b:000000c002b65ae8 EFLAGS: 00000212 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000000c000025000 RCX: 000000000049e0bb
RDX: 000000c002b65b64 RSI: 0000000040086602 RDI: 000000000000001b
RBP: 000000c002b65b28 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000212 R12: 000000c02b38c840
R13: 0000000000000001 R14: 000000c018a321a0 R15: ffffffffffffffff