Open Source and information security mailing list archives
Message-ID: <ae9f3887-5205-8aa8-afa7-4e01d03921bc@kernel.dk>
Date:   Mon, 31 Aug 2020 21:38:59 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Xin Yin <yinxin_1989@...yun.com>, viro@...iv.linux.org.uk
Cc:     linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] io_uring: Fix NULL pointer dereference in
 io_sq_wq_submit_work()

On 8/31/20 7:54 PM, Xin Yin wrote:
> Commit 1c4404efcf2c0 ("io_uring: make sure async workqueue
> is canceled on exit") caused a crash in io_sq_wq_submit_work():
> when the io_uring workqueue gets a req from the async_list, the
> req may not have been added to the task_list yet, so trying to
> delete it from the task_list causes a NULL pointer dereference.

Hmm, do you have a reproducer for this?

> @@ -2356,9 +2358,11 @@ static void io_sq_wq_submit_work(struct work_struct *work)
>   * running. We currently only allow this if the new request is sequential
>   * to the previous one we punted.
>   */
> -static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
> +static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req,
> +							struct io_ring_ctx *ctx)
>  {
>  	bool ret;
> +	unsigned long flags;
>  
>  	if (!list)
>  		return false;
> @@ -2378,6 +2382,13 @@ static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
>  		list_del_init(&req->list);
>  		ret = false;
>  	}
> +
> +	if (ret) {
> +		spin_lock_irqsave(&ctx->task_lock, flags);
> +		list_add(&req->task_list, &ctx->task_list);
> +		req->work_task = NULL;
> +		spin_unlock_irqrestore(&ctx->task_lock, flags);
> +	}
>  	spin_unlock(&list->lock);
>  	return ret;
>  }
> @@ -2454,7 +2465,7 @@ static int __io_queue_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
>  			s->sqe = sqe_copy;
>  			memcpy(&req->submit, s, sizeof(*s));
>  			list = io_async_list_from_req(ctx, req);
> -			if (!io_add_to_prev_work(list, req)) {
> +			if (!io_add_to_prev_work(list, req, ctx)) {
>  				if (list)
>  					atomic_inc(&list->cnt);
>  				INIT_WORK(&req->work, io_sq_wq_submit_work);
> 

ctx == req->ctx, so you should not need that change.
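
In other words, the quoted hunk could keep the original signature and derive the context from the request itself. A rough sketch of what that might look like (untested, and the surrounding logic is elided):

```c
/* Sketch only: same fix, but without threading ctx through the
 * call, since req->ctx already points at the ring context. */
static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	unsigned long flags;
	bool ret;

	...

	if (ret) {
		spin_lock_irqsave(&ctx->task_lock, flags);
		list_add(&req->task_list, &ctx->task_list);
		req->work_task = NULL;
		spin_unlock_irqrestore(&ctx->task_lock, flags);
	}
	spin_unlock(&list->lock);
	return ret;
}
```

That way the __io_queue_sqe() caller stays untouched as well.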

-- 
Jens Axboe
