Message-ID: <519793c6.a99e.1966d93f534.Coremail.luckd0g@163.com>
Date: Fri, 25 Apr 2025 23:32:15 +0800 (CST)
From: "Jianzhou Zhao" <luckd0g@....com>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, jack@...e.com
Subject: Potential Linux Crash: possible deadlock in udf_page_mkwrite in
Linux 6.12.24 (longterm maintenance)
Hello, I found a potential bug titled "possible deadlock in udf_page_mkwrite" with a modified syzkaller on Linux 6.12.24 (longterm maintenance, last updated on April 20, 2025).
Unfortunately, I have not been able to reproduce this bug.
If you fix this issue, please add the following tags to the commit:
Reported-by: Jianzhou Zhao <luckd0g@....com>
Reported-by: xingwei lee <xrivendell7@...il.com>
Reported-by: Penglei Jiang <superman.xpt@...il.com>
The kernel commit is: b6efa8ce222e58cfe2bbaa4e3329818c2b4bd74e
kernel config: https://syzkaller.appspot.com/text?tag=KernelConfig&x=55f8591b98dd132
compiler: gcc version 11.4.0
------------[ cut here ]------------
TITLE: possible deadlock in udf_page_mkwrite
------------[ cut here ]------------
======================================================
WARNING: possible circular locking dependency detected
6.12.24 #3 Not tainted
------------------------------------------------------
syz.6.870/18445 is trying to acquire lock:
ffff888053607700 (mapping.invalidate_lock#4){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:870 [inline]
ffff888053607700 (mapping.invalidate_lock#4){++++}-{3:3}, at: udf_page_mkwrite+0x2af/0xa20 fs/udf/file.c:50
but task is already holding lock:
ffff888044124518 (sb_pagefaults#2){.+.+}-{0:0}, at: do_page_mkwrite+0x17d/0x390 mm/memory.c:3161
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #9 (sb_pagefaults#2){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1716 [inline]
sb_start_pagefault include/linux/fs.h:1881 [inline]
udf_page_mkwrite+0x18b/0xa20 fs/udf/file.c:48
do_page_mkwrite+0x17d/0x390 mm/memory.c:3161
wp_page_shared mm/memory.c:3562 [inline]
do_wp_page+0x1291/0x4860 mm/memory.c:3712
handle_pte_fault mm/memory.c:5786 [inline]
__handle_mm_fault+0x150c/0x2a20 mm/memory.c:5913
handle_mm_fault+0x404/0xab0 mm/memory.c:6081
do_user_addr_fault+0x61b/0x13a0 arch/x86/mm/fault.c:1338
handle_page_fault arch/x86/mm/fault.c:1481 [inline]
exc_page_fault+0x98/0x180 arch/x86/mm/fault.c:1539
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
-> #8 (&vma->vm_lock->lock){++++}-{3:3}:
down_write+0x92/0x200 kernel/locking/rwsem.c:1577
vma_start_write include/linux/mm.h:757 [inline]
vma_link+0x268/0x490 mm/vma.c:1604
insert_vm_struct+0x197/0x3f0 mm/mmap.c:2011
__bprm_mm_init fs/exec.c:289 [inline]
bprm_mm_init fs/exec.c:393 [inline]
alloc_bprm+0x6d1/0xd30 fs/exec.c:1578
kernel_execve+0xb0/0x3d0 fs/exec.c:2010
try_to_run_init_process init/main.c:1397 [inline]
kernel_init+0x154/0x2d0 init/main.c:1525
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
-> #7 (&mm->mmap_lock){++++}-{3:3}:
__might_fault mm/memory.c:6719 [inline]
__might_fault+0x118/0x190 mm/memory.c:6713
_inline_copy_from_user include/linux/uaccess.h:162 [inline]
_copy_from_user+0x2b/0xd0 lib/usercopy.c:18
copy_from_user include/linux/uaccess.h:212 [inline]
__blk_trace_setup+0x96/0x180 kernel/trace/blktrace.c:626
blk_trace_setup+0x47/0x70 kernel/trace/blktrace.c:648
sg_ioctl_common drivers/scsi/sg.c:1114 [inline]
sg_ioctl+0x65e/0x2720 drivers/scsi/sg.c:1156
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:907 [inline]
__se_sys_ioctl fs/ioctl.c:893 [inline]
__x64_sys_ioctl+0x19d/0x210 fs/ioctl.c:893
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcb/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #6 (&q->debugfs_mutex){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x147/0x930 kernel/locking/mutex.c:752
blk_mq_init_sched+0x436/0x650 block/blk-mq-sched.c:473
elevator_init_mq+0x2cc/0x420 block/elevator.c:610
device_add_disk+0x10e/0x12f0 block/genhd.c:411
sd_probe+0xa0e/0xf80 drivers/scsi/sd.c:4024
call_driver_probe drivers/base/dd.c:579 [inline]
really_probe+0x24f/0xa90 drivers/base/dd.c:658
__driver_probe_device+0x1df/0x450 drivers/base/dd.c:800
driver_probe_device+0x4c/0x1a0 drivers/base/dd.c:830
__device_attach_driver+0x1db/0x2f0 drivers/base/dd.c:958
bus_for_each_drv+0x149/0x1d0 drivers/base/bus.c:459
__device_attach_async_helper+0x1d1/0x290 drivers/base/dd.c:987
async_run_entry_fn+0x9c/0x530 kernel/async.c:129
process_one_work+0xa02/0x1bf0 kernel/workqueue.c:3232
process_scheduled_works kernel/workqueue.c:3314 [inline]
worker_thread+0x677/0xe90 kernel/workqueue.c:3395
kthread+0x2c7/0x3b0 kernel/kthread.c:389
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
-> #5 (&q->q_usage_counter(queue)#51){++++}-{0:0}:
blk_queue_enter+0x4d0/0x600 block/blk-core.c:328
blk_mq_alloc_request+0x422/0x9c0 block/blk-mq.c:680
scsi_alloc_request drivers/scsi/scsi_lib.c:1227 [inline]
scsi_execute_cmd+0x1fe/0xf20 drivers/scsi/scsi_lib.c:304
read_capacity_16+0x1f2/0xe60 drivers/scsi/sd.c:2655
sd_read_capacity drivers/scsi/sd.c:2824 [inline]
sd_revalidate_disk.isra.0+0x1989/0xa440 drivers/scsi/sd.c:3734
sd_probe+0x887/0xf80 drivers/scsi/sd.c:4010
call_driver_probe drivers/base/dd.c:579 [inline]
really_probe+0x24f/0xa90 drivers/base/dd.c:658
__driver_probe_device+0x1df/0x450 drivers/base/dd.c:800
driver_probe_device+0x4c/0x1a0 drivers/base/dd.c:830
__device_attach_driver+0x1db/0x2f0 drivers/base/dd.c:958
bus_for_each_drv+0x149/0x1d0 drivers/base/bus.c:459
__device_attach_async_helper+0x1d1/0x290 drivers/base/dd.c:987
async_run_entry_fn+0x9c/0x530 kernel/async.c:129
process_one_work+0xa02/0x1bf0 kernel/workqueue.c:3232
process_scheduled_works kernel/workqueue.c:3314 [inline]
worker_thread+0x677/0xe90 kernel/workqueue.c:3395
kthread+0x2c7/0x3b0 kernel/kthread.c:389
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
-> #4 (&q->limits_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x147/0x930 kernel/locking/mutex.c:752
queue_limits_start_update include/linux/blkdev.h:935 [inline]
loop_reconfigure_limits+0x1f2/0x960 drivers/block/loop.c:1004
loop_set_block_size drivers/block/loop.c:1474 [inline]
lo_simple_ioctl drivers/block/loop.c:1497 [inline]
lo_ioctl+0xb92/0x1870 drivers/block/loop.c:1560
blkdev_ioctl+0x27b/0x6c0 block/ioctl.c:693
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:907 [inline]
__se_sys_ioctl fs/ioctl.c:893 [inline]
__x64_sys_ioctl+0x19d/0x210 fs/ioctl.c:893
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcb/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #3 (&q->q_usage_counter(io)#19){++++}-{0:0}:
bio_queue_enter block/blk.h:76 [inline]
blk_mq_submit_bio+0x2167/0x2b70 block/blk-mq.c:3090
__submit_bio+0x399/0x630 block/blk-core.c:629
__submit_bio_noacct_mq block/blk-core.c:716 [inline]
submit_bio_noacct_nocheck+0x6c3/0xd00 block/blk-core.c:745
submit_bio_noacct+0x57a/0x1fa0 block/blk-core.c:868
submit_bh fs/buffer.c:2824 [inline]
__bread_slow fs/buffer.c:1270 [inline]
__bread_gfp+0x189/0x340 fs/buffer.c:1494
sb_bread include/linux/buffer_head.h:346 [inline]
read_block_bitmap fs/udf/balloc.c:43 [inline]
load_block_bitmap+0x1ff/0x570 fs/udf/balloc.c:99
udf_bitmap_free_blocks fs/udf/balloc.c:150 [inline]
udf_free_blocks+0x57c/0x10a0 fs/udf/balloc.c:674
udf_evict_inode+0x34c/0x590 fs/udf/inode.c:163
evict+0x3ef/0x940 fs/inode.c:725
iput_final fs/inode.c:1877 [inline]
iput fs/inode.c:1903 [inline]
iput+0x511/0x7f0 fs/inode.c:1889
do_unlinkat+0x58f/0x710 fs/namei.c:4540
__do_sys_unlink fs/namei.c:4581 [inline]
__se_sys_unlink fs/namei.c:4579 [inline]
__x64_sys_unlink+0xc7/0x110 fs/namei.c:4579
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcb/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #2 (&sbi->s_alloc_mutex){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x147/0x930 kernel/locking/mutex.c:752
udf_bitmap_new_block fs/udf/balloc.c:236 [inline]
udf_new_block+0x7e2/0x1750 fs/udf/balloc.c:721
inode_getblk+0xe88/0x3bb0 fs/udf/inode.c:895
udf_map_block+0x2e3/0x5a0 fs/udf/inode.c:447
__udf_get_block+0x9c/0x340 fs/udf/inode.c:461
__block_write_begin_int+0x4e5/0x1670 fs/buffer.c:2121
block_write_begin+0x9a/0x1d0 fs/buffer.c:2231
udf_write_begin+0x1bc/0x280 fs/udf/inode.c:256
generic_perform_write+0x2bd/0x900 mm/filemap.c:4065
__generic_file_write_iter+0x1f6/0x240 mm/filemap.c:4166
udf_file_write_iter+0x233/0x740 fs/udf/file.c:111
new_sync_write fs/read_write.c:590 [inline]
vfs_write+0xbfc/0x10d0 fs/read_write.c:683
ksys_write+0x122/0x250 fs/read_write.c:736
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcb/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&ei->i_data_sem#2){++++}-{3:3}:
down_write+0x92/0x200 kernel/locking/rwsem.c:1577
udf_expand_file_adinicb+0x174/0x970 fs/udf/inode.c:363
udf_setsize+0x4b2/0x10f0 fs/udf/inode.c:1289
udf_setattr+0x523/0x6c0 fs/udf/file.c:236
notify_change+0x6d3/0x1280 fs/attr.c:503
do_truncate+0x143/0x200 fs/open.c:65
vfs_truncate+0x3e3/0x4c0 fs/open.c:111
do_sys_truncate.part.0+0xf7/0x140 fs/open.c:134
do_sys_truncate fs/open.c:128 [inline]
__do_sys_truncate fs/open.c:146 [inline]
__se_sys_truncate fs/open.c:144 [inline]
__x64_sys_truncate+0x6d/0xa0 fs/open.c:144
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcb/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (mapping.invalidate_lock#4){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain kernel/locking/lockdep.c:3904 [inline]
__lock_acquire+0x2425/0x3b90 kernel/locking/lockdep.c:5202
lock_acquire.part.0+0x11b/0x370 kernel/locking/lockdep.c:5825
down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
filemap_invalidate_lock_shared include/linux/fs.h:870 [inline]
udf_page_mkwrite+0x2af/0xa20 fs/udf/file.c:50
do_page_mkwrite+0x17d/0x390 mm/memory.c:3161
wp_page_shared mm/memory.c:3562 [inline]
do_wp_page+0x1291/0x4860 mm/memory.c:3712
handle_pte_fault mm/memory.c:5786 [inline]
__handle_mm_fault+0x150c/0x2a20 mm/memory.c:5913
handle_mm_fault+0x404/0xab0 mm/memory.c:6081
do_user_addr_fault+0x61b/0x13a0 arch/x86/mm/fault.c:1338
handle_page_fault arch/x86/mm/fault.c:1481 [inline]
exc_page_fault+0x98/0x180 arch/x86/mm/fault.c:1539
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
other info that might help us debug this:
Chain exists of:
mapping.invalidate_lock#4 --> &vma->vm_lock->lock --> sb_pagefaults#2
 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(sb_pagefaults#2);
                               lock(&vma->vm_lock->lock);
                               lock(sb_pagefaults#2);
  rlock(mapping.invalidate_lock#4);

 *** DEADLOCK ***
2 locks held by syz.6.870/18445:
#0: ffff888050dbed18 (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_read include/linux/mm.h:704 [inline]
#0: ffff888050dbed18 (&vma->vm_lock->lock){++++}-{3:3}, at: lock_vma_under_rcu+0x141/0x9a0 mm/memory.c:6247
#1: ffff888044124518 (sb_pagefaults#2){.+.+}-{0:0}, at: do_page_mkwrite+0x17d/0x390 mm/memory.c:3161
stack backtrace:
CPU: 0 UID: 0 PID: 18445 Comm: syz.6.870 Not tainted 6.12.24 #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x116/0x1b0 lib/dump_stack.c:120
print_circular_bug+0x406/0x5c0 kernel/locking/lockdep.c:2074
check_noncircular+0x2f7/0x3e0 kernel/locking/lockdep.c:2206
check_prev_add kernel/locking/lockdep.c:3161 [inline]
check_prevs_add kernel/locking/lockdep.c:3280 [inline]
validate_chain kernel/locking/lockdep.c:3904 [inline]
__lock_acquire+0x2425/0x3b90 kernel/locking/lockdep.c:5202
lock_acquire.part.0+0x11b/0x370 kernel/locking/lockdep.c:5825
down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
filemap_invalidate_lock_shared include/linux/fs.h:870 [inline]
udf_page_mkwrite+0x2af/0xa20 fs/udf/file.c:50
do_page_mkwrite+0x17d/0x390 mm/memory.c:3161
wp_page_shared mm/memory.c:3562 [inline]
do_wp_page+0x1291/0x4860 mm/memory.c:3712
handle_pte_fault mm/memory.c:5786 [inline]
__handle_mm_fault+0x150c/0x2a20 mm/memory.c:5913
handle_mm_fault+0x404/0xab0 mm/memory.c:6081
do_user_addr_fault+0x61b/0x13a0 arch/x86/mm/fault.c:1338
handle_page_fault arch/x86/mm/fault.c:1481 [inline]
exc_page_fault+0x98/0x180 arch/x86/mm/fault.c:1539
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7fc9b0c4f002
Code: f0 48 8b 54 24 f0 8b 8a 80 00 00 00 8b 82 04 01 00 00 21 c8 83 c1 01 c1 e0 04 48 8b b4 02 40 01 00 00 48 8b 84 02 48 01 00 00 <89> 8a 80 00 00 00 48 81 fe 45 23 01 00 74 09 48 81 fe 56 34 02 00
RSP: 002b:00007fc9b1c9af88 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00007fc9b0fe6080 RCX: 0000000000000001
RDX: 0000200000ff9000 RSI: 0000000000000000 RDI: 0000200000ff9000
RBP: 00007fc9b0e46d56 R08: 0000000000000000 R09: 0000000000000000
R10: 0000200000ff9000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc9b0fe6080 R15: 00007fc9b1c7b000
</TASK>
==================================================================
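For what it's worth, the cycle closes in udf_page_mkwrite(): according to the trace, fs/udf/file.c takes sb_pagefaults via sb_start_pagefault() at line 48 (entry #9 in the chain) and then mapping.invalidate_lock via filemap_invalidate_lock_shared() at line 50 (entry #0), while the fault path already holds the VMA lock. A minimal sketch of that ordering, paraphrased from the trace rather than copied verbatim from fs/udf/file.c in v6.12.24, looks like this:

	static vm_fault_t udf_page_mkwrite(struct vm_fault *vmf)
	{
		struct inode *inode = file_inode(vmf->vma->vm_file);
		struct address_space *mapping = inode->i_mapping;

		/* fs/udf/file.c:48 - takes sb_pagefaults (entry #9 in the chain) */
		sb_start_pagefault(inode->i_sb);
		/* fs/udf/file.c:50 - then mapping.invalidate_lock (entry #0) */
		filemap_invalidate_lock_shared(mapping);

		/* ... fault handling elided ... */

		filemap_invalidate_unlock_shared(mapping);
		sb_end_pagefault(inode->i_sb);
		return VM_FAULT_LOCKED;
	}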
I hope this helps.

Best regards,
Jianzhou Zhao