Message-ID: <9639d0f7-f50d-c3f2-8e68-b208286af68f@kernel.dk>
Date: Thu, 30 Jun 2022 14:19:11 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Dylan Yudaken <dylany@...com>,
Pavel Begunkov <asml.silence@...il.com>,
io-uring@...r.kernel.org
Cc: Kernel-team@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 for-next 00/12] io_uring: multishot recv
On 6/30/22 3:12 AM, Dylan Yudaken wrote:
> This series adds support for multishot recv/recvmsg to io_uring.
>
> The idea is that generally socket applications will be continually
> enqueuing a new recv() when the previous one completes. This can be
> improved on by allowing the application to queue a multishot receive,
> which will post completions as and when data is available. It uses the
> provided buffers feature to receive new data into a pool provided by
> the application.
>
> This is more performant in a few ways:
> * Subsequent receives are queued up straight away without requiring the
> application to finish a processing loop.
> * If there is more data in the socket (say the provided buffer
>   size is smaller than the socket buffer), then the data is
>   immediately returned, improving batching.
> * Poll is only armed once and reused, saving CPU cycles.
>
> Running a small network benchmark [1] shows improved QPS of ~6-8% over
> a range of loads.
I have applied this, changing ->addr2 to ->ioprio for the flags bit as
per the io_uring-5.19 branch.
Pretty excited about recv multishot. I think it's an elegant model, and
it has really nice performance improvements as well!
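For anyone who wants to play with it, here's a rough sketch of what the
userspace side might look like against the for-next headers (buffer group
id, sizes, and the user_data tag below are arbitrary, and it skips buffer
re-provisioning and most error handling):

#include <liburing.h>

#define BGID      1        /* arbitrary provided-buffer group id */
#define NR_BUFS   64
#define BUF_SIZE  4096
#define RECV_TAG  0xbeef   /* user_data tag to spot recv CQEs */

static void arm_multishot_recv(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	/* len 0: the kernel picks a buffer from group BGID per CQE */
	io_uring_prep_recv(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	sqe->ioprio |= IORING_RECV_MULTISHOT;
	sqe->user_data = RECV_TAG;
}

static int recv_loop(struct io_uring *ring, int sockfd, void *bufs)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	/* hand NR_BUFS buffers of BUF_SIZE each to the kernel */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_SIZE, NR_BUFS, BGID, 0);
	sqe->user_data = 0;

	arm_multishot_recv(ring, sockfd);
	io_uring_submit(ring);

	for (;;) {
		ret = io_uring_wait_cqe(ring, &cqe);
		if (ret < 0)
			return ret;
		if (cqe->user_data != RECV_TAG)
			goto seen;

		if (cqe->res > 0 && (cqe->flags & IORING_CQE_F_BUFFER)) {
			int bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;

			/* cqe->res bytes landed in buffer 'bid'; consume
			 * them and eventually provide the buffer back */
		}

		/* no MORE flag: the multishot stopped (error, out of
		 * buffers, ...) and has to be re-armed */
		if (!(cqe->flags & IORING_CQE_F_MORE)) {
			arm_multishot_recv(ring, sockfd);
			io_uring_submit(ring);
		}
seen:
		io_uring_cqe_seen(ring, cqe);
	}
}

The nice part is that the steady state is just reaping CQEs and recycling
buffers; the recv only needs to be re-armed when the kernel drops the
IORING_CQE_F_MORE flag.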
--
Jens Axboe