Message-ID: <5065512b-a128-939c-8ca3-d8198f768859@kernel.dk>
Date: Mon, 29 Mar 2021 07:30:17 -0600
From: Jens Axboe <axboe@...nel.dk>
To: syzbot <syzbot+796d767eb376810256f5@...kaller.appspotmail.com>,
asml.silence@...il.com, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] WARNING: still has locks held in io_sq_thread
On 3/29/21 7:29 AM, syzbot wrote:
> Hello,
>
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> WARNING in kvm_wait
>
> ------------[ cut here ]------------
> raw_local_irq_restore() called with IRQs enabled
> WARNING: CPU: 1 PID: 5134 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
> Modules linked in:
> CPU: 1 PID: 5134 Comm: syz-executor.2 Not tainted 5.12.0-rc2-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> RIP: 0010:warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
> Code: bf ff cc cc cc cc cc cc cc cc cc cc cc 80 3d 65 c2 0f 04 00 74 01 c3 48 c7 c7 a0 7b 6b 89 c6 05 54 c2 0f 04 01 e8 65 19 bf ff <0f> 0b c3 48 39 77 10 0f 84 97 00 00 00 66 f7 47 22 f0 ff 74 4b 48
> RSP: 0018:ffffc90002f5f9c0 EFLAGS: 00010286
> RAX: 0000000000000000 RBX: ffff888023a7d040 RCX: 0000000000000000
> RDX: ffff88801bbcc2c0 RSI: ffffffff815b7375 RDI: fffff520005ebf2a
> RBP: 0000000000000200 R08: 0000000000000000 R09: 0000000000000000
> R10: ffffffff815b00de R11: 0000000000000000 R12: 0000000000000003
> R13: ffffed100474fa08 R14: 0000000000000001 R15: ffff8880b9f36000
> FS: 000000000293e400(0000) GS:ffff8880b9f00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007ffd20e04f88 CR3: 00000000116b8000 CR4: 00000000001506e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> kvm_wait arch/x86/kernel/kvm.c:860 [inline]
> kvm_wait+0xc9/0xe0 arch/x86/kernel/kvm.c:837
> pv_wait arch/x86/include/asm/paravirt.h:564 [inline]
> pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
> __pv_queued_spin_lock_slowpath+0x8b8/0xb40 kernel/locking/qspinlock.c:508
> pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:554 [inline]
> queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
> queued_spin_lock include/asm-generic/qspinlock.h:85 [inline]
> do_raw_spin_lock+0x200/0x2b0 kernel/locking/spinlock_debug.c:113
> spin_lock include/linux/spinlock.h:354 [inline]
> ext4_lock_group fs/ext4/ext4.h:3383 [inline]
> __ext4_new_inode+0x384f/0x5570 fs/ext4/ialloc.c:1188
> ext4_symlink+0x489/0xd50 fs/ext4/namei.c:3347
> vfs_symlink fs/namei.c:4176 [inline]
> vfs_symlink+0x10f/0x270 fs/namei.c:4161
> do_symlinkat+0x27a/0x300 fs/namei.c:4206
> do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
> entry_SYSCALL_64_after_hwframe+0x44/0xae
Same one that keeps happening; it's not related.
--
Jens Axboe