Message-ID: <105a78f0-407f-09e3-5951-7f76756762b2@gmail.com>
Date: Fri, 19 Jun 2020 17:30:16 +0300
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org
Subject: Re: [PATCH 04/15] io_uring: re-issue block requests that failed
because of resources
On 19/06/2020 17:22, Jens Axboe wrote:
> On 6/19/20 8:12 AM, Pavel Begunkov wrote:
>> On 18/06/2020 17:43, Jens Axboe wrote:
>>> Mark the plug with nowait == true, which will cause requests to avoid
>>> blocking on request allocation. If they do, we catch them and reissue
>>> them from a task_work based handler.
>>>
>>> Normally we can catch -EAGAIN directly, but the hard case is for split
>>> requests. As an example, the application issues a 512KB request. The
>>> block core will split this into 128KB if that's the max size for the
>>> device. The first request issues just fine, but we run into -EAGAIN for
>>> some later splits for the same request. As the bio is split, we don't
>>> get to see the -EAGAIN until one of the actual reads completes, and hence
>>> we cannot handle it inline as part of submission.
>>>
>>> This does potentially cause re-reads of parts of the range, as the whole
>>> request is reissued. There's currently no better way to handle this.
>>>
>>> Signed-off-by: Jens Axboe <axboe@...nel.dk>
>>> ---
>>> fs/io_uring.c | 148 ++++++++++++++++++++++++++++++++++++++++++--------
>>> 1 file changed, 124 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 2e257c5a1866..40413fb9d07b 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -900,6 +900,13 @@ static int io_file_get(struct io_submit_state *state, struct io_kiocb *req,
>>> static void __io_queue_sqe(struct io_kiocb *req,
>>> const struct io_uring_sqe *sqe);
>>>
>> [...]
>>> +
>>> +static void io_rw_resubmit(struct callback_head *cb)
>>> +{
>>> + struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
>>> + struct io_ring_ctx *ctx = req->ctx;
>>> + int err;
>>> +
>>> + __set_current_state(TASK_RUNNING);
>>> +
>>> + err = io_sq_thread_acquire_mm(ctx, req);
>>> +
>>> + if (io_resubmit_prep(req, err)) {
>>> + refcount_inc(&req->refs);
>>> + io_queue_async_work(req);
>>> + }
>>
>> Hmm, I have similar stuff but for iopoll. On top of that it removes
>> grab_env* for linked reqs, plus some extras. I think I'll rebase on top
>> of this.
>
> Yes, there's certainly overlap there. I consider this series basically
> wrapped up, so feel free to just base on top of it.
>
>>> +static bool io_rw_reissue(struct io_kiocb *req, long res)
>>> +{
>>> +#ifdef CONFIG_BLOCK
>>> + struct task_struct *tsk;
>>> + int ret;
>>> +
>>> + if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker())
>>> + return false;
>>> +
>>> + tsk = req->task;
>>> + init_task_work(&req->task_work, io_rw_resubmit);
>>> + ret = task_work_add(tsk, &req->task_work, true);
>>
>> I don't like that the request becomes undiscoverable for cancellation
>> while sitting in the task_work list. The poll stuff at least has
>> hash_node for that.
>
> Async buffered IO was never cancelable, so it doesn't really matter.
> It's tied to the task, so we know it'll get executed - either run, or
> canceled if the task is going away. This is really not that different
> from having the work discoverable through io-wq queueing before, since
> the latter could never be canceled anyway as it sits there
> uninterruptibly waiting for IO completion.
Makes sense. I was thinking about using this task-requeue for all kinds
of requests. Though, instead of speculating, it'd be better for me to turn
the ideas into patches and see.
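
Very roughly, something like the sketch below (completely untested, just
to illustrate the idea; io_req_task_requeue and the cb argument are
made-up names, only the helpers from the hunk above are real):

static bool io_req_task_requeue(struct io_kiocb *req,
				void (*cb)(struct callback_head *))
{
	struct task_struct *tsk = req->task;
	int ret;

	/* run the requeue callback in the context of the submitting task */
	init_task_work(&req->task_work, cb);
	ret = task_work_add(tsk, &req->task_work, true);
	if (unlikely(ret)) {
		/* the task is exiting, let the caller fall back to io-wq */
		return false;
	}
	wake_up_process(tsk);
	return true;
}

io_rw_reissue() would then just pass io_rw_resubmit as the callback, and
other request types could do the same with their own handlers.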
--
Pavel Begunkov