Message-ID: <fe4cb05b-2fd0-25dd-e120-9a9d57659008@gmail.com>
Date: Sun, 1 Mar 2020 19:23:14 +0300
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 0/9] nxt propagation + locking optimisation
On 01/03/2020 19:18, Pavel Begunkov wrote:
> There are several independent parts in the patchset, but they are bundled
> together to make a point.
> 1-2: random stuff that is implicitly used later.
> 3-5: restore @nxt propagation
> 6-8: optimise locking in io_worker_handle_work()
> 9: optimise io_uring refcounting
>
> The @nxt propagation bits are done similarly to how it was done before, but:
> - nxt stealing is now done at the top level rather than hidden in handlers
> - it is ensured that there is no stealing when REQ_F_DONT_STEAL_NEXT is set
>
> [6-8] is the reason for dismissing the previous @nxt propagation approach:
> I didn't find a good way to do the same there, even though it looked
> clearer and didn't need a new flag.
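
To make the top-level stealing idea concrete, here is a minimal user-space
sketch (an illustration only, not the patchset's kernel code: struct request,
handle_work() and the dont_steal_next field are made-up stand-ins for
io_kiocb, the io-wq handler and REQ_F_DONT_STEAL_NEXT). The handler reports
its linked next request instead of running it, and the top-level loop picks
it up directly, without going back through the queue:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct request {
	int id;
	bool dont_steal_next;	/* stand-in for REQ_F_DONT_STEAL_NEXT */
	struct request *nxt;	/* linked request */
};

/* the handler completes the request and reports its linked next, if any */
static void handle_work(struct request *req, struct request **nxt)
{
	printf("completed req %d\n", req->id);
	if (!req->dont_steal_next)
		*nxt = req->nxt;
}

static void worker_loop(struct request *req)
{
	while (req) {
		struct request *nxt = NULL;

		handle_work(req, &nxt);
		/* top-level stealing: run @nxt directly, no re-queueing */
		req = nxt;
	}
}

int main(void)
{
	struct request r2 = { .id = 2 };
	struct request r1 = { .id = 1, .nxt = &r2 };

	worker_loop(&r1);
	return 0;
}
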
>
> Performance was tested with a link of nops + IOSQE_ASYNC:
>
> link size: 100
> orig: 501 (ns per nop)
> 0-8: 446
> 0-9: 416
>
> link size: 10
> orig: 826
> 0-8: 776
> 0-9: 756
BTW, that's basically QD1, and with contention for wqe->lock the gap should be
even wider.
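
For reference, a benchmark along these lines can be sketched with liburing;
this is a guess at the setup (LINK_LEN, ITERS and the timing loop are
illustrative), not the exact tool that produced the numbers above:

#include <stdio.h>
#include <time.h>
#include <liburing.h>

#define LINK_LEN	100	/* "link size" from the numbers above */
#define ITERS		1000

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	struct timespec start, end;
	long long total_ns;
	int i, j, ret;

	ret = io_uring_queue_init(LINK_LEN, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERS; i++) {
		/* queue one link of LINK_LEN nops, each forced async */
		for (j = 0; j < LINK_LEN; j++) {
			struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

			io_uring_prep_nop(sqe);
			sqe->flags |= IOSQE_ASYNC;
			if (j + 1 < LINK_LEN)	/* last sqe terminates the link */
				sqe->flags |= IOSQE_IO_LINK;
		}
		io_uring_submit(&ring);
		for (j = 0; j < LINK_LEN; j++) {
			io_uring_wait_cqe(&ring, &cqe);
			io_uring_cqe_seen(&ring, cqe);
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	total_ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
		   (end.tv_nsec - start.tv_nsec);
	printf("%lld ns per nop\n", total_ns / ((long long)ITERS * LINK_LEN));

	io_uring_queue_exit(&ring);
	return 0;
}
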
>
> Pavel Begunkov (9):
> io_uring: clean up io_close
> io-wq: fix IO_WQ_WORK_NO_CANCEL cancellation
> io_uring: make submission ref putting consistent
> io_uring: remove @nxt from handlers
> io_uring: get next req on subm ref drop
> io-wq: shuffle io_worker_handle_work() code
> io-wq: io_worker_handle_work() optimise locking
> io-wq: optimise double lock for io_get_next_work()
> io_uring: pass submission ref to async
>
> fs/io-wq.c | 162 ++++++++++++----------
> fs/io_uring.c | 366 ++++++++++++++++++++++----------------------------
> 2 files changed, 258 insertions(+), 270 deletions(-)
>
--
Pavel Begunkov