Message-ID: <8431d920-ab84-447d-84fc-eb7904b1c733@gmail.com>
Date: Wed, 12 Jun 2024 14:52:34 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: chase xd <sl1589472800@...il.com>, Jens Axboe <axboe@...nel.dk>,
io-uring@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [io-uring] WARNING in io_fill_cqe_req_aux
On 6/12/24 13:35, Pavel Begunkov wrote:
> On 6/12/24 08:10, chase xd wrote:
>> Sorry, now I'm also a bit confused by the branch choice. I checked
>> out the "for-6.9/io_uring" branch and started testing on it. I
>> assume that was the latest version of io_uring at the time; even
>> now, if I check out that branch, the bug still exists. How should I
>> know whether a branch will be merged, and which branch do you think
>> I should test on? Thanks.
>
> # git show a69d20885494:io_uring/io_uring.c | grep -A 13 io_fill_cqe_req_aux
> bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags)
> {
>         struct io_ring_ctx *ctx = req->ctx;
>         u64 user_data = req->cqe.user_data;
>
>         if (!defer)
>                 return __io_post_aux_cqe(ctx, user_data, res, cflags, false);
>
>         lockdep_assert_held(&ctx->uring_lock);
>         io_lockdep_assert_cq_locked(ctx);
>
>         ctx->submit_state.flush_cqes = true;
>         return io_fill_cqe_aux(ctx, user_data, res, cflags);
> }
>
> That's the buggy version from the hash you're testing. IIRC it
> was in the tree for longer than necessary, which is presumably
> why you found it, but it was never sent to Linus. Below is the
> current state of for-6.9 and what it was replaced with,
> respectively. Let me separately check for-6.9/io_uring if you're
> concerned about it.
In other words, it happens that bugs appear in the branches
but get rooted out before they get anywhere. The main confusion
is that the version you're looking at was fixed up back somewhere
in March. That's fine, I'd just recommend fetching the repo and
updating your base.
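For reference, "fetch the repo and update your base" amounts to something
like the sketch below. It builds a throwaway local stand-in repo so the
commands are self-contained; with the real tree you'd skip the setup and
just run the last three commands against your existing clone (remote and
local branch names here are illustrative):

```shell
set -e
cd "$(mktemp -d)"
# Stand-in upstream repo with a topic branch (replace with the real remote)
git init -q upstream
git -C upstream -c user.email=t@t -c user.name=t \
        commit -q --allow-empty -m "base"
git -C upstream branch for-6.9/io_uring
git clone -q upstream work
cd work
# The actual update steps: refresh the remote branches, then repoint
# the local testing base at the updated branch head
git fetch origin
git checkout -B test-base origin/for-6.9/io_uring
git log --oneline -1    # sanity-check which head you're now on
```

The `checkout -B` resets the local branch even if it already exists, so a
stale base from an earlier checkout can't linger.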
I can't hit the problem with for-6.9/io_uring, which makes sense
because it's lacking the patch I'd blame it on. I'm confused as to
how you're seeing it there.
> # git show for-6.9/io_uring:io_uring/io_uring.c | grep -A 30 io_fill_cqe_req_aux
> bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags)
> {
>         struct io_ring_ctx *ctx = req->ctx;
>         u64 user_data = req->cqe.user_data;
>         struct io_uring_cqe *cqe;
>
>         lockdep_assert(!io_wq_current_is_worker());
>
>         if (!defer)
>                 return __io_post_aux_cqe(ctx, user_data, res, cflags, false);
>
>         lockdep_assert_held(&ctx->uring_lock);
>
>         if (ctx->submit_state.cqes_count == ARRAY_SIZE(ctx->completion_cqes)) {
>         ...
>
> # git show origin/for-6.10/io_uring:io_uring/io_uring.c | grep -A 13 io_req_post_cqe
> bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
> {
>         struct io_ring_ctx *ctx = req->ctx;
>         bool posted;
>
>         lockdep_assert(!io_wq_current_is_worker());
>         lockdep_assert_held(&ctx->uring_lock);
>
>         __io_cq_lock(ctx);
>         posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
>         ctx->submit_state.cq_flush = true;
>         __io_cq_unlock_post(ctx);
>         return posted;
> }
>
--
Pavel Begunkov