Message-ID: <13383a9d-2bf0-1542-bc05-cfa00975e921@kernel.dk>
Date: Fri, 12 May 2023 07:58:50 -0600
From: Jens Axboe <axboe@...nel.dk>
To: luhongfei <luhongfei@...o.com>,
Pavel Begunkov <asml.silence@...il.com>,
"open list:IO_URING" <io-uring@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Cc: opensource.kernel@...o.com
Subject: Re: [PATCH] io_uring: Fix bug in io_fallback_req_func that
 can cause deadlock
On 5/12/23 3:56 AM, luhongfei wrote:
> There is a bug in io_fallback_req_func that can cause a deadlock,
> because uring_lock is not released on the early return.
> This patch releases uring_lock before returning.
>
> Signed-off-by: luhongfei <luhongfei@...o.com>
> ---
> io_uring/io_uring.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
> mode change 100644 => 100755 io_uring/io_uring.c
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 3bca7a79efda..1af793c7b3da
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -252,8 +252,10 @@ static __cold void io_fallback_req_func(struct work_struct *work)
> mutex_lock(&ctx->uring_lock);
> llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
> req->io_task_work.func(req, &ts);
> - if (WARN_ON_ONCE(!ts.locked))
> + if (WARN_ON_ONCE(!ts.locked)) {
> + mutex_unlock(&ctx->uring_lock);
> return;
> + }
> io_submit_flush_completions(ctx);
> mutex_unlock(&ctx->uring_lock);
> }
I'm guessing you found this by reading the code, and didn't actually hit
it? Because it looks fine as-is. We lock ctx->uring_lock and set
ts.locked == true. If ts.locked is false, then someone further down
unlocked the ring, which is unexpected (hence the WARN_ON_ONCE()). But if
that did happen, then we definitely don't want to unlock it again.
Because of that, I don't think your patch is correct.
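
To illustrate: ts.locked tracks lock ownership across the task_work
callbacks. A callback that drops the lock clears ts.locked, so the
caller must only unlock if the flag is still set. Here's a minimal
userspace analogue of that handoff pattern (pthread-based, and the
names task_work_cb/fallback are made up for illustration, this is not
the kernel code):

#include <pthread.h>
#include <stdbool.h>

struct tw_state {
	bool locked;	/* does the caller still own the lock? */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * A callback may hand the lock off: if it unlocks, it clears
 * ts->locked so the caller knows ownership was given up.
 */
static void task_work_cb(struct tw_state *ts, bool drop)
{
	if (drop) {
		pthread_mutex_unlock(&lock);
		ts->locked = false;
	}
}

static void fallback(void)
{
	struct tw_state ts = { .locked = true };

	pthread_mutex_lock(&lock);
	task_work_cb(&ts, false);

	if (!ts.locked) {
		/*
		 * The callback already released the lock; unlocking
		 * here again would be a double-unlock.
		 */
		return;
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	fallback();
	return 0;
}

The point being that whoever clears the flag has already done the
unlock, so the early return without unlocking is the correct behavior.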
--
Jens Axboe