Message-ID: <4eeefb43-488c-dc90-f47c-10defe6f9278@kernel.dk>
Date: Tue, 1 Sep 2020 08:52:51 -0600
From: Jens Axboe <axboe@...nel.dk>
To: yinxin_1989 <yinxin_1989@...yun.com>,
viro <viro@...iv.linux.org.uk>
Cc: linux-block <linux-block@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] io_uring: Fix NULL pointer dereference in
io_sq_wq_submit_work()
On 8/31/20 10:59 PM, yinxin_1989 wrote:
>
>>On 8/31/20 7:54 PM, Xin Yin wrote:
>>> Commit 1c4404efcf2c0 ("io_uring: make sure async workqueue is
>>> canceled on exit") caused a crash in io_sq_wq_submit_work(): the
>>> io_uring workqueue can pick up a req from the async_list that has
>>> not yet been added to the task_list, and trying to delete that req
>>> from the task_list then causes a NULL pointer dereference.
>>
>>Hmm, do you have a reproducer for this?
>
> I updated the code to linux-5.4.y, and I can reproduce this issue on
> an ARM board and on my x86 PC with the fio tool.
Right, I figured this was 5.4 stable, as that's the only version that
has this patch.
> fio -filename=/home/yinxin/testfile -direct=0 -ioengine=io_uring -iodepth 128 -rw=read -bs=16K -size=1G -numjobs=1 -runtime=60 -group_reporting -name=iops
Gotcha, thanks!
>>> @@ -2356,9 +2358,11 @@ static void io_sq_wq_submit_work(struct work_struct *work)
>>> * running. We currently only allow this if the new request is sequential
>>> * to the previous one we punted.
>>> */
>>> -static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
>>> +static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req,
>>> + struct io_ring_ctx *ctx)
>>> {
>>> bool ret;
>>> + unsigned long flags;
>>>
>>> if (!list)
>>> return false;
>>> @@ -2378,6 +2382,13 @@ static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
>>> list_del_init(&req->list);
>>> ret = false;
>>> }
>>> +
>>> + if (ret) {
>>> + spin_lock_irqsave(&ctx->task_lock, flags);
>>> + list_add(&req->task_list, &ctx->task_list);
>>> + req->work_task = NULL;
>>> + spin_unlock_irqrestore(&ctx->task_lock, flags);
>>> + }
>>> spin_unlock(&list->lock);
>>> return ret;
>>> }
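For anyone following along: the crash site is the task_list removal in
io_sq_wq_submit_work(), which is roughly this (sketch only, the exact
5.4-stable surroundings are elided):

	spin_lock_irq(&ctx->task_lock);
	list_del(&req->task_list);	/* crashes if ->task_list was never linked */
	spin_unlock_irq(&ctx->task_lock);

With the hunk above, a req that io_add_to_prev_work() accepts is first
linked under ctx->task_lock, so this list_del() no longer runs on an
entry that was never added.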
>>> @@ -2454,7 +2465,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>>> s->sqe = sqe_copy;
>>> memcpy(&req->submit, s, sizeof(*s));
>>> list = io_async_list_from_req(ctx, req);
>>> - if (!io_add_to_prev_work(list, req)) {
>>> + if (!io_add_to_prev_work(list, req, ctx)) {
>>> if (list)
>>> atomic_inc(&list->cnt);
>>> INIT_WORK(&req->work, io_sq_wq_submit_work);
>>>
>>ctx == req->ctx, so you should not need that change.
>
> In my test, the req had not yet been added to the task_list (perhaps
> still waiting on ctx->task_lock), and io_sq_wq_submit_work() tried to
> delete it from the task_list, which causes this issue.
Sure, but req->ctx is set when the req is initialized. If req->ctx !=
ctx here, then that would be pretty disastrous... So you can drop that
part of the patch.
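IOW, something like this instead (untested sketch against 5.4-stable,
keeping the existing io_add_to_prev_work() signature; middle of the
function unchanged and elided):

static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
{
	/* req->ctx is set at init time and matches the submitting ctx */
	struct io_ring_ctx *ctx = req->ctx;
	unsigned long flags;
	bool ret;

	if (!list)
		return false;
	...
	if (ret) {
		spin_lock_irqsave(&ctx->task_lock, flags);
		list_add(&req->task_list, &ctx->task_list);
		req->work_task = NULL;
		spin_unlock_irqrestore(&ctx->task_lock, flags);
	}
	spin_unlock(&list->lock);
	return ret;
}

and the __io_queue_sqe() caller stays as it was.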
Care to send with that changed? Then I'm fine with queueing this up for
5.4-stable. Thanks!
--
Jens Axboe