Message-ID: <c76b558c-bc12-3247-2bb5-986e5ffcba1f@oracle.com>
Date: Tue, 9 Apr 2019 17:46:45 +0800
From: "jianchao.wang" <jianchao.w.wang@...cle.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: axboe@...nel.dk, viro@...iv.linux.org.uk,
linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2] io_uring: introduce inline reqs for
IORING_SETUP_IOPOLL
Hi Christoph
On 4/9/19 5:37 PM, Christoph Hellwig wrote:
> On Tue, Apr 09, 2019 at 01:21:54PM +0800, Jianchao Wang wrote:
>> For the IORING_SETUP_IOPOLL case, all submissions and completions
>> are handled under ctx->uring_lock or in the SQ poll thread
>> context, so io_get_req and io_put_req are already well serialized.
>> The only exception is the asynchronous workqueue context, which
>> could free the io_kiocb on error. To handle this, we allocate a new
>> io_kiocb and free the previous inlined one.
>>
>> Based on this, we introduce a preallocated reqs list per ctx and
>> need not provide any lock to serialize updates of the list.
>> Performance benefits from this. The test result of the following fio
>> command
>
> I really don't like the idea of exposing this to userspace. Is
> there any good reason to not simply always allocate inline request
> up to a certain ring size?
>
Sorry, I don't quite get your point.
Nothing is exposed to userspace. We try to allocate a fixed 128
preallocated reqs per ctx if IORING_SETUP_IOPOLL is set. When these
inlined reqs are used up, reqs are allocated in the old fashion.
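To make the idea concrete, here is a minimal userspace sketch of the scheme being described: a fixed pool of 128 requests preallocated per ctx, popped without locking (the callers are assumed to already be serialized by ctx->uring_lock or the SQ poll thread), with a fallback to plain heap allocation once the pool is exhausted. All names (sketch_ctx, sketch_req, INLINE_REQS, etc.) are illustrative, not the actual io_uring structures or patch code:

```c
#include <stdlib.h>

#define INLINE_REQS 128 /* fixed per-ctx pool size, as in the patch description */

struct sketch_req {
	int inlined;   /* 1 if the req came from the preallocated pool */
	char pad[64];  /* stand-in for the real request payload */
};

struct sketch_ctx {
	struct sketch_req pool[INLINE_REQS];       /* preallocated at ring setup */
	struct sketch_req *free_list[INLINE_REQS]; /* simple LIFO free list */
	int nr_free;
};

static void ctx_init(struct sketch_ctx *ctx)
{
	ctx->nr_free = 0;
	for (int i = 0; i < INLINE_REQS; i++) {
		ctx->pool[i].inlined = 1;
		ctx->free_list[ctx->nr_free++] = &ctx->pool[i];
	}
}

/*
 * No locking here on purpose: the premise is that all callers are
 * already serialized (uring_lock or SQ poll thread context).
 */
static struct sketch_req *get_req(struct sketch_ctx *ctx)
{
	if (ctx->nr_free) /* fast path: pop an inline req from the pool */
		return ctx->free_list[--ctx->nr_free];

	/* pool exhausted: fall back to allocating in the old fashion */
	struct sketch_req *req = malloc(sizeof(*req));
	if (req)
		req->inlined = 0;
	return req;
}

static void put_req(struct sketch_ctx *ctx, struct sketch_req *req)
{
	if (req->inlined)
		ctx->free_list[ctx->nr_free++] = req; /* return to the pool */
	else
		free(req); /* heap-allocated fallback req */
}
```

The `inlined` flag is what lets put_req route a request back to the pool instead of free(); the real patch would have to make the same distinction when the workqueue path replaces an inlined req with a freshly allocated one.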
Thanks
Jianchao