Message-ID: <c85e28df251d4c66a511dc157b795b13@AcuMS.aculab.com>
Date: Fri, 25 Jun 2021 08:15:10 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Jens Axboe' <axboe@...nel.dk>,
Olivier Langlois <olivier@...llion01.com>,
Pavel Begunkov <asml.silence@...il.com>,
"io-uring@...r.kernel.org" <io-uring@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v4] io_uring: reduce latency by reissueing the operation
From: Jens Axboe
> Sent: 25 June 2021 01:45
>
> On 6/22/21 6:17 AM, Olivier Langlois wrote:
> > It is quite frequent that when an operation fails and returns EAGAIN,
> > the data becomes available between that failure and the call to
> > vfs_poll() done by io_arm_poll_handler().
> >
> > Detecting the situation and reissuing the operation is much faster
> > than going ahead and pushing the operation to the io-wq.
> >
> > Performance improvement testing has been performed with:
> > Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
> >
> > 4 measurements have been taken:
> > 1. The time it takes to process a read request when data is already available
> > 2. The time it takes to process a read request by calling io_issue_sqe() twice, after vfs_poll()
> >    indicated that data was available
> > 3. The time it takes to execute io_queue_async_work()
> > 4. The time it takes to complete a read request asynchronously
> >
> > 2.25% of all the read operations did use the new path.
How much slower is it when the data to complete the read isn't
available?
I suspect there are different workflows where that is almost
always true.
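
For my own understanding, the fast path being proposed looks, in userspace
terms, something like the sketch below. This is only a rough analogy I've put
together, not the actual io_uring code, and read_with_reissue() is just a name
I've made up:

    /* Userspace analogy of the idea in the patch (not the kernel code):
     * a non-blocking read fails with EAGAIN, but by the time we poll the
     * fd the data may already have arrived, so retrying the read inline
     * is cheaper than handing the request off to a worker. */
    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    static ssize_t read_with_reissue(int fd, void *buf, size_t len)
    {
        ssize_t ret = read(fd, buf, len);       /* first attempt */

        if (ret >= 0 || errno != EAGAIN)
            return ret;                         /* done, or a real error */

        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        /* Zero-timeout poll: has data arrived since the failed read? */
        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN))
            return read(fd, buf, len);          /* reissue inline (fast path) */

        /* Still no data: fall back to the slow path, i.e. queue the
         * request for async handling (io-wq in the kernel patch). */
        errno = EAGAIN;
        return -1;
    }

The case I'm asking about is when that poll says "not ready" and the request
still has to take the slow path.
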
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)