Message-ID: <67789c12.050a0220.3b53b0.0043.GAE@google.com>
Date: Fri, 03 Jan 2025 18:25:22 -0800
From: syzbot <syzbot+3d1442173e1be9889481@...kaller.appspotmail.com>
To: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
syzkaller-bugs@...glegroups.com
Subject: [syzbot] [hfs?] WARNING in check_flush_dependency (3)
Hello,
syzbot found the following issue on:
HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=11df6ac4580000
kernel config: https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=3d1442173e1be9889481
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/98a974fc662d/disk-8155b4ef.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2dea9b72f624/vmlinux-8155b4ef.xz
kernel image: https://storage.googleapis.com/syzbot-assets/593a42b9eb34/bzImage-8155b4ef.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3d1442173e1be9889481@...kaller.appspotmail.com
------------[ cut here ]------------
workqueue: WQ_MEM_RECLAIM dio/loop6:dio_aio_complete_work is flushing !WQ_MEM_RECLAIM events_long:flush_mdb
WARNING: CPU: 1 PID: 51 at kernel/workqueue.c:3712 check_flush_dependency+0x329/0x3c0 kernel/workqueue.c:3708
Modules linked in:
CPU: 1 UID: 0 PID: 51 Comm: kworker/1:1 Not tainted 6.13.0-rc3-next-20241220-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: dio/loop6 dio_aio_complete_work
RIP: 0010:check_flush_dependency+0x329/0x3c0 kernel/workqueue.c:3708
Code: 08 4c 89 f7 e8 e8 f6 9d 00 49 8b 16 49 81 c4 78 01 00 00 48 c7 c7 20 d6 09 8c 4c 89 ee 4c 89 e1 4c 8b 04 24 e8 38 3f f8 ff 90 <0f> 0b 90 90 e9 4a ff ff ff 89 d9 80 e1 07 80 c1 03 38 c1 0f 8c 09
RSP: 0018:ffffc90000bb77c0 EFLAGS: 00010046
RAX: 5c710a1b8e2d3500 RBX: ffff88802f455808 RCX: ffff88801e6a5a00
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000008 R08: ffffffff817feaa2 R09: 1ffffffff1d026d4
R10: dffffc0000000000 R11: fffffbfff1d026d5 R12: ffff88801ac81578
R13: ffff88807c52d178 R14: ffff888020699018 R15: ffff888020699020
FS: 0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000005632e000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
start_flush_work kernel/workqueue.c:4162 [inline]
__flush_work+0x286/0xc60 kernel/workqueue.c:4199
flush_work kernel/workqueue.c:4256 [inline]
flush_delayed_work+0x169/0x1c0 kernel/workqueue.c:4278
hfs_file_fsync+0xea/0x140 fs/hfs/inode.c:680
generic_write_sync include/linux/fs.h:2952 [inline]
dio_complete+0x55c/0x6b0 fs/direct-io.c:313
process_one_work kernel/workqueue.c:3229 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
worker_thread+0x870/0xd30 kernel/workqueue.c:3391
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
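
For context on the warning: hfs queues its MDB (superblock) writeback work on the
non-reclaim system_long_wq (the "events_long" pool named above), and
hfs_file_fsync() flushes that work. When fsync is reached from
dio_aio_complete_work, which runs on the WQ_MEM_RECLAIM "dio/loop6" workqueue,
check_flush_dependency() warns that a reclaim-safe worker is waiting on a
non-reclaim one. A minimal sketch of the two sides, paraphrased from fs/hfs in
mainline and not verified against the exact linux-next tree above:

/* fs/hfs/super.c (paraphrased): the MDB flush work goes to system_long_wq,
 * which is not WQ_MEM_RECLAIM. */
void hfs_mark_mdb_dirty(struct super_block *sb)
{
	struct hfs_sb_info *sbi = HFS_SB(sb);
	unsigned long delay;

	if (sb_rdonly(sb))
		return;
	spin_lock(&sbi->work_lock);
	if (!sbi->work_queued) {
		delay = msecs_to_jiffies(dirty_writeback_interval * 10);
		queue_delayed_work(system_long_wq, &sbi->mdb_work, delay);
		sbi->work_queued = 1;
	}
	spin_unlock(&sbi->work_lock);
}

/* fs/hfs/inode.c (paraphrased): fsync flushes that delayed work. In this
 * report fsync runs from the WQ_MEM_RECLAIM dio workqueue, so the flush
 * creates the reclaim dependency that check_flush_dependency() flags. */
int hfs_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
{
	struct inode *inode = filp->f_mapping->host;
	struct super_block *sb = inode->i_sb;
	int ret;

	ret = file_write_and_wait_range(filp, start, end);
	if (ret)
		return ret;
	/* sync the superblock (MDB) to buffers, then the buffers to disk */
	flush_delayed_work(&HFS_SB(sb)->mdb_work);
	return sync_blockdev(sb->s_bdev);
}

One possible direction, offered only as a suggestion and not a tested fix, would
be to queue mdb_work on a filesystem-owned WQ_MEM_RECLAIM workqueue instead of
system_long_wq, so that flushing it from a reclaim-safe context is legal.
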
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@...glegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup