Message-ID: <5601285f-6628-f121-243e-44de7b15c779@kernel.dk>
Date: Thu, 12 Jan 2023 08:27:48 -0700
From: Jens Axboe <axboe@...nel.dk>
To: syzbot <syzbot+6805087452d72929404e@...kaller.appspotmail.com>,
asml.silence@...il.com, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] WARNING in io_cqring_event_overflow
On 1/12/23 3:56 AM, syzbot wrote:
> Hello,
>
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> WARNING in io_cqring_event_overflow
>
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 2836 at io_uring/io_uring.c:734 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
> Modules linked in:
> CPU: 1 PID: 2836 Comm: kworker/u4:4 Not tainted 6.2.0-rc3-syzkaller-00011-g0af4af977a59 #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
> Workqueue: events_unbound io_ring_exit_work
> pstate: 80400005 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> pc : io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
> lr : io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
> sp : ffff8000164abad0
> x29: ffff8000164abad0 x28: ffff0000c655e578 x27: ffff80000d49b000
> x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
> x23: 0000000000000000 x22: 0000000000000000 x21: 0000000000000000
> x20: 0000000000000000 x19: ffff0000d1727000 x18: 00000000000000c0
> x17: ffff80000df48158 x16: ffff80000dd86118 x15: ffff0000c60dce00
> x14: 0000000000000110 x13: 00000000ffffffff x12: ffff0000c60dce00
> x11: ff808000095945e8 x10: 0000000000000000 x9 : ffff8000095945e8
> x8 : ffff0000c60dce00 x7 : ffff80000c1090e0 x6 : 0000000000000000
> x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
> x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
> Call trace:
> io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
> io_req_cqe_overflow+0x5c/0x70 io_uring/io_uring.c:773
> io_fill_cqe_req io_uring/io_uring.h:168 [inline]
> io_do_iopoll+0x474/0x62c io_uring/rw.c:1065
> io_iopoll_try_reap_events+0x6c/0x108 io_uring/io_uring.c:1513
> io_uring_try_cancel_requests+0x13c/0x258 io_uring/io_uring.c:3056
> io_ring_exit_work+0xec/0x390 io_uring/io_uring.c:2869
> process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
> worker_thread+0x340/0x610 kernel/workqueue.c:2436
> kthread+0x12c/0x158 kernel/kthread.c:376
> ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:863
> irq event stamp: 576210
> hardirqs last enabled at (576209): [<ffff80000c1238f8>] __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:159 [inline]
> hardirqs last enabled at (576209): [<ffff80000c1238f8>] _raw_spin_unlock_irq+0x3c/0x70 kernel/locking/spinlock.c:202
> hardirqs last disabled at (576210): [<ffff80000c110630>] el1_dbg+0x24/0x80 arch/arm64/kernel/entry-common.c:405
> softirqs last enabled at (576168): [<ffff80000bfd4634>] spin_unlock_bh include/linux/spinlock.h:395 [inline]
> softirqs last enabled at (576168): [<ffff80000bfd4634>] batadv_nc_purge_paths+0x1d0/0x214 net/batman-adv/network-coding.c:471
> softirqs last disabled at (576166): [<ffff80000bfd44c4>] spin_lock_bh include/linux/spinlock.h:355 [inline]
> softirqs last disabled at (576166): [<ffff80000bfd44c4>] batadv_nc_purge_paths+0x60/0x214 net/batman-adv/network-coding.c:442
> ---[ end trace 0000000000000000 ]---
> ------------[ cut here ]------------

Pavel, don't we want to make this follow the usual lockdep rules?

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1dd0fc0412c8..5aab3fa3b7c5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -753,7 +753,7 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
 	size_t ocq_size = sizeof(struct io_overflow_cqe);
 	bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
 
-	lockdep_assert_held(&ctx->completion_lock);
+	io_lockdep_assert_cq_locked(ctx);
 
 	if (is_cqe32)
 		ocq_size += sizeof(struct io_uring_cqe);
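
(For reference, a rough sketch of the helper the patch switches to.
io_lockdep_assert_cq_locked() lives in io_uring/io_uring.h and picks the
assertion based on how the ring posts CQEs; the branches below are an
approximation and may not match the tree under test exactly.)

/* Sketch only, not the exact definition in the tree under test */
static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx)
{
	if (ctx->flags & IORING_SETUP_IOPOLL) {
		/* IOPOLL completions are posted under the ring mutex */
		lockdep_assert_held(&ctx->uring_lock);
	} else if (!ctx->task_complete) {
		/* default case: the CQ is protected by ->completion_lock */
		lockdep_assert_held(&ctx->completion_lock);
	} else {
		/* task_complete: only the submitter task posts CQEs */
		lockdep_assert(current == ctx->submitter_task);
	}
}
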
--
Jens Axboe