Message-Id: <20201228125045.774596353@linuxfoundation.org>
Date: Mon, 28 Dec 2020 13:48:36 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>,
Pavel Begunkov <asml.silence@...il.com>,
Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 5.10 518/717] io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
From: Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>
commit c07e6719511e77c4b289f62bfe96423eb6ea061d upstream.
io_iopoll_complete() does not hold completion_lock to complete polled io,
so in io_wq_submit_work() we cannot call io_req_complete() directly to
complete polled io; otherwise there may be concurrent access to the
cqring, defer_list, etc., which is not safe. Commit dad1b1242fd5
("io_uring: always let io_iopoll_complete() complete polled io") fixed
this, but Pavel reported that IOPOLL, apart from rw, can also issue
buffer registration and unregistration requests
(IORING_OP_PROVIDE_BUFFERS or IORING_OP_REMOVE_BUFFERS), so that fix is
not sufficient.

Since io_iopoll_complete() is always called under uring_lock, for polled
io we can also take uring_lock here to fix this issue.
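To illustrate the idea outside the kernel: below is a minimal,
self-contained sketch of the same conditional-locking pattern in plain C
with pthreads. All structs and helpers in it are made-up stand-ins for
io_ring_ctx/io_kiocb, not kernel or liburing APIs; it only mirrors the
shape of the fix.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for io_ring_ctx / io_kiocb; not kernel types. */
struct ring_ctx {
	pthread_mutex_t uring_lock;	/* serializes completions for IOPOLL rings */
	bool iopoll;			/* ring was set up for polled io */
	int completed;			/* toy stand-in for the CQ ring */
};

struct request {
	struct ring_ctx *ctx;
	int result;
};

/*
 * Mirrors the fix: only grab uring_lock when the ring is IOPOLL, because
 * the polled completion path (io_iopoll_complete() upstream) already runs
 * under that lock and never takes completion_lock.
 */
static void complete_failed_request(struct request *req, int ret)
{
	struct ring_ctx *lock_ctx = NULL;

	if (req->ctx->iopoll)
		lock_ctx = req->ctx;

	if (lock_ctx)
		pthread_mutex_lock(&lock_ctx->uring_lock);

	req->result = ret;
	req->ctx->completed++;	/* would post a CQE in the real code */

	if (lock_ctx)
		pthread_mutex_unlock(&lock_ctx->uring_lock);
}

int main(void)
{
	struct ring_ctx ctx = {
		.uring_lock = PTHREAD_MUTEX_INITIALIZER,
		.iopoll = true,
	};
	struct request req = { .ctx = &ctx };

	complete_failed_request(&req, -5 /* -EIO */);
	printf("completed=%d result=%d\n", ctx.completed, req.result);
	return 0;
}

The point is the same as in the patch: the failing path and the polled
completion path serialize on one lock, instead of relying on a
completion_lock that the polled path never takes.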
Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
Cc: <stable@...r.kernel.org> # 5.5+
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence@...il.com>
[axboe: don't deref 'req' after completing it]
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/io_uring.c | 29 +++++++++++++++++++----------
1 file changed, 19 insertions(+), 10 deletions(-)
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6081,19 +6081,28 @@ static struct io_wq_work *io_wq_submit_w
 	}
 
 	if (ret) {
+		struct io_ring_ctx *lock_ctx = NULL;
+
+		if (req->ctx->flags & IORING_SETUP_IOPOLL)
+			lock_ctx = req->ctx;
+
 		/*
-		 * io_iopoll_complete() does not hold completion_lock to complete
-		 * polled io, so here for polled io, just mark it done and still let
-		 * io_iopoll_complete() complete it.
+		 * io_iopoll_complete() does not hold completion_lock to
+		 * complete polled io, so here for polled io, we can not call
+		 * io_req_complete() directly, otherwise there maybe concurrent
+		 * access to cqring, defer_list, etc, which is not safe. Given
+		 * that io_iopoll_complete() is always called under uring_lock,
+		 * so here for polled io, we also get uring_lock to complete
+		 * it.
 		 */
-		if (req->ctx->flags & IORING_SETUP_IOPOLL) {
-			struct kiocb *kiocb = &req->rw.kiocb;
+		if (lock_ctx)
+			mutex_lock(&lock_ctx->uring_lock);
 
-			kiocb_done(kiocb, ret, NULL);
-		} else {
-			req_set_fail_links(req);
-			io_req_complete(req, ret);
-		}
+		req_set_fail_links(req);
+		io_req_complete(req, ret);
+
+		if (lock_ctx)
+			mutex_unlock(&lock_ctx->uring_lock);
 	}
 
 	return io_steal_work(req);