Message-Id: <20200923114419.71218-6-songmuchun@bytedance.com>
Date: Wed, 23 Sep 2020 19:44:19 +0800
From: Muchun Song <songmuchun@...edance.com>
To: axboe@...nel.dk, viro@...iv.linux.org.uk
Cc: linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, zhuyinyin@...edance.com,
Muchun Song <songmuchun@...edance.com>,
Jiachen Zhang <zhangjiachen.jaycee@...edance.com>
Subject: [PATCH v2 5/5] io_uring: Fix double list add in io_queue_async_work()
If we queue work in io_poll_wake(), it leads to a double list add,
because the request is added to ctx->task_list again without having
been removed first. So only add the request to the task_list when the
callback function is io_sq_wq_submit_work.
The following oops was seen:
list_add double add: new=ffff9ca6a8f1b0e0, prev=ffff9ca62001cee8,
next=ffff9ca6a8f1b0e0.
------------[ cut here ]------------
kernel BUG at lib/list_debug.c:31!
Call Trace:
<IRQ>
io_poll_wake+0xf3/0x230
__wake_up_common+0x91/0x170
__wake_up_common_lock+0x7a/0xc0
io_commit_cqring+0xea/0x280
? blkcg_iolatency_done_bio+0x2b/0x610
io_cqring_add_event+0x3e/0x60
io_complete_rw+0x58/0x80
dio_complete+0x106/0x250
blk_update_request+0xa0/0x3b0
blk_mq_end_request+0x1a/0x110
blk_mq_complete_request+0xd0/0xe0
nvme_irq+0x129/0x270 [nvme]
__handle_irq_event_percpu+0x7b/0x190
handle_irq_event_percpu+0x30/0x80
handle_irq_event+0x3c/0x60
handle_edge_irq+0x91/0x1e0
do_IRQ+0x4d/0xd0
common_interrupt+0xf/0xf
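
For illustration only (not part of this patch), the following minimal
userspace sketch shows the failure mode: calling list_add() on a node
that is already on the list trips the double-add check, which is what
the trace above reports. The helpers mirror the kernel's list_add()
and a simplified form of the CONFIG_DEBUG_LIST validation in
lib/list_debug.c; the task_list/req names are placeholders for this
example, not the real io_uring structures.

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head;
	head->prev = head;
}

static void list_add(struct list_head *new, struct list_head *head)
{
	struct list_head *next = head->next;

	/* Simplified double-add check: the new entry is already linked here. */
	if (new == head || new == next)
		fprintf(stderr, "list_add double add: new=%p, prev=%p, next=%p.\n",
			(void *)new, (void *)head, (void *)next);

	new->next = next;
	new->prev = head;
	next->prev = new;
	head->next = new;
}

int main(void)
{
	struct list_head task_list, req;

	INIT_LIST_HEAD(&task_list);
	INIT_LIST_HEAD(&req);

	list_add(&req, &task_list);	/* first queueing: fine */
	list_add(&req, &task_list);	/* re-queued without list_del(): double add */

	return 0;
}
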
Fixes: 1c4404efcf2c ("io_uring: make sure async workqueue is canceled on exit")
Reported-by: Jiachen Zhang <zhangjiachen.jaycee@...edance.com>
Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
fs/io_uring.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c65f78f395655..a7cfe976480d8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -513,12 +513,14 @@ static inline void io_queue_async_work(struct io_ring_ctx *ctx,
 		}
 	}
 
-	req->files = current->files;
+	if (req->work.func == io_sq_wq_submit_work) {
+		req->files = current->files;
 
-	spin_lock_irqsave(&ctx->task_lock, flags);
-	list_add(&req->task_list, &ctx->task_list);
-	req->work_task = NULL;
-	spin_unlock_irqrestore(&ctx->task_lock, flags);
+		spin_lock_irqsave(&ctx->task_lock, flags);
+		list_add(&req->task_list, &ctx->task_list);
+		req->work_task = NULL;
+		spin_unlock_irqrestore(&ctx->task_lock, flags);
+	}
 
 	queue_work(ctx->sqo_wq[rw], &req->work);
 }
@@ -667,6 +669,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
 		state->cur_req++;
 	}
 
+	INIT_LIST_HEAD(&req->task_list);
 	req->file = NULL;
 	req->ctx = ctx;
 	req->flags = 0;
--
2.11.0