Message-ID: <cdd4bf56-5a08-5d28-969f-81b70cc3c473@gmail.com>
Date:   Fri, 19 Jun 2020 17:12:17 +0300
From:   Pavel Begunkov <asml.silence@...il.com>
To:     io-uring@...r.kernel.org, Jens Axboe <axboe@...nel.dk>
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, akpm@...ux-foundation.org
Subject: Re: [PATCH 04/15] io_uring: re-issue block requests that failed
 because of resources

On 18/06/2020 17:43, Jens Axboe wrote:
> Mark the plug with nowait == true, which will cause requests to avoid
> blocking on request allocation. If they would have blocked, we catch
> them and reissue them from a task_work-based handler.
> 
> Normally we can catch -EAGAIN directly, but the hard case is for split
> requests. As an example, the application issues a 512KB request. The
> block core will split this into 128KB chunks if that's the max size for
> the device. The first request issues just fine, but we run into -EAGAIN
> for some of the later splits of the same request. As the bio is split,
> we don't get to see the -EAGAIN until one of the actual reads completes,
> and hence we cannot handle it inline as part of submission.
> 
> This does potentially cause re-reads of parts of the range, as the whole
> request is reissued. There's currently no better way to handle this.
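
To make the split scenario concrete, here's a minimal userspace sketch of
the kind of request that triggers it (liburing; the device path, sizes and
alignment below are made up for illustration, not taken from the patch):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <liburing.h>

#define READ_SZ	(512 * 1024)	/* one SQE; the block core may split it */

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov;
	void *buf;
	int fd;

	/* hypothetical device whose max request size is 128KB */
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, READ_SZ))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = READ_SZ;

	if (io_uring_queue_init(8, &ring, 0))
		return 1;
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_readv(sqe, fd, &iov, 1, 0);
	io_uring_submit(&ring);

	/*
	 * With this patch, a mid-split -EAGAIN is no longer surfaced as
	 * cqe->res; the kernel reissues the whole request from task_work.
	 */
	if (!io_uring_wait_cqe(&ring, &cqe)) {
		printf("res=%d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}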
> 
> Signed-off-by: Jens Axboe <axboe@...nel.dk>
> ---
>  fs/io_uring.c | 148 ++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 124 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 2e257c5a1866..40413fb9d07b 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -900,6 +900,13 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
>  static void __io_queue_sqe(struct io_kiocb *req,
>  			   const struct io_uring_sqe *sqe);
>  
...
> +
> +static void io_rw_resubmit(struct callback_head *cb)
> +{
> +	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
> +	struct io_ring_ctx *ctx = req->ctx;
> +	int err;
> +
> +	__set_current_state(TASK_RUNNING);
> +
> +	err = io_sq_thread_acquire_mm(ctx, req);
> +
> +	if (io_resubmit_prep(req, err)) {
> +		refcount_inc(&req->refs);
> +		io_queue_async_work(req);
> +	}

Hmm, I have similar stuff, but for iopoll, with removal of grab_env* for
linked reqs and some extras on top. I think I'll rebase on top of this.

> +}
> +#endif
> +
> +static bool io_rw_reissue(struct io_kiocb *req, long res)
> +{
> +#ifdef CONFIG_BLOCK
> +	struct task_struct *tsk;
> +	int ret;
> +
> +	if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker())
> +		return false;
> +
> +	tsk = req->task;
> +	init_task_work(&req->task_work, io_rw_resubmit);
> +	ret = task_work_add(tsk, &req->task_work, true);

I don't like that the request becomes undiscoverable for cancellation
while it's sitting in the task_work list. The poll stuff at least has
hash_node for that.
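
For comparison, the poll side hashes requests by user_data so a cancel
can find them. Roughly this shape in the same file (paraphrasing the
5.7-era fs/io_uring.c from memory, so details may differ):

/* insertion: the request stays discoverable via ctx->cancel_hash */
static void io_poll_req_insert(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct hlist_head *list;

	list = &ctx->cancel_hash[hash_long(req->user_data,
					   ctx->cancel_hash_bits)];
	hlist_add_head(&req->hash_node, list);
}

/* cancellation: walk the bucket and match on user_data */
static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, __u64 sqe_addr)
{
	struct hlist_head *list;
	struct io_kiocb *req;

	list = &ctx->cancel_hash[hash_long(sqe_addr, ctx->cancel_hash_bits)];
	hlist_for_each_entry(req, list, hash_node) {
		if (sqe_addr == req->user_data)
			return req;
	}
	return NULL;
}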

> +	if (!ret)
> +		return true;
> +#endif
> +	return false;
> +}
> +
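
For reference, the task_work pattern relied on above looks roughly like
this in isolation (a generic kernel-context sketch with made-up names,
not the patch itself; the callback runs in the target task's own context
on its way back to userspace):

#include <linux/sched.h>
#include <linux/task_work.h>

static struct callback_head example_work;	/* normally embedded in the req */

static void example_cb(struct callback_head *cb)
{
	/* runs in the task's own context, so its mm/creds are usable */
}

static bool example_queue(struct task_struct *tsk)
{
	init_task_work(&example_work, example_cb);
	/*
	 * 'true' = notify: signal the task so the work runs promptly
	 * even if it is currently blocked in the kernel.
	 */
	return task_work_add(tsk, &example_work, true) == 0;
}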

-- 
Pavel Begunkov
