Message-ID: <cb105638-2394-5eef-216f-9c6ff918ee59@gmail.com>
Date: Sat, 14 Sep 2019 13:11:08 +0300
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] Optimise io_uring completion waiting
It solves much of the problem, though we still have the overhead of
traversing a wait queue + indirect calls for condition checking.
I've been thinking of either
1. creating n wait queues and bucketing waiters. E.g. log2(min_events)
bucketing would remove at least half of such calls for arbitrary
min_events, and all of them if min_events is a power of 2 (rough
sketch below).
2. or digging deeper and adding a custom wake_up with, perhaps, a
sorted wait_queue.
As I see it, both are pretty bulky and over-engineered, but maybe
somebody knows an easier way?
Anyway, I don't have performance numbers for that, so I don't know
whether it would be justified.
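
E.g. something along these lines for (1) -- an untested sketch only;
the cq_wait_* names and the bucket count are invented for
illustration, not taken from the patchset:

#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/wait.h>

#define CQ_WAIT_NR_BUCKETS	16

struct cq_wait_buckets {
	wait_queue_head_t buckets[CQ_WAIT_NR_BUCKETS];
};

/* thresholds in [2^k, 2^(k+1)) share bucket k */
static inline int cq_wait_bucket(unsigned int min_events)
{
	return min(ilog2(min_events | 1), CQ_WAIT_NR_BUCKETS - 1);
}

/*
 * A waiter sleeps on its own bucket and rechecks its condition as
 * usual, e.g.:
 *	wait_event(cw->buckets[cq_wait_bucket(min_events)], cond);
 */

static void cq_wait_wake(struct cq_wait_buckets *cw, unsigned int done)
{
	int i;

	if (!done)
		return;

	/*
	 * Buckets above ilog2(done) only hold waiters that need at
	 * least 2^i > done events, so they are skipped. Only the
	 * topmost woken bucket can still see spurious wakeups, which
	 * wait_event() handles by rechecking the condition.
	 */
	for (i = 0; i <= cq_wait_bucket(done); i++)
		if (wq_has_sleeper(&cw->buckets[i]))
			wake_up(&cw->buckets[i]);
}

That would keep the walk short for small completion batches and bound
the number of spurious condition checks, at the cost of extra wait
queue heads.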
On 14/09/2019 03:31, Jens Axboe wrote:
> On 9/13/19 4:28 PM, Pavel Begunkov (Silence) wrote:
>> From: Pavel Begunkov <asml.silence@...il.com>
>>
>> There could be a lot of overhead within generic wait_event_*() used for
>> waiting for a large number of completions. The patchset removes much of
>> it by using custom wait event (wait_threshold).
>>
>> Synthetic test showed ~40% performance boost. (see patch 2)
>
> Nifty, from an io_uring perspective, I like this a lot.
>
> The core changes needed to support it look fine as well. I'll await
> Peter/Ingo's comments on it.
>
--
Yours sincerely,
Pavel Begunkov