Message-ID: <e60ac354-76d6-7073-7b75-0a8ad04b3435@gmail.com>
Date: Mon, 13 Mar 2023 03:52:03 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [RFC 0/2] optimise local-tw task rescheduling
On 3/12/23 15:31, Jens Axboe wrote:
> On 3/11/23 1:53 PM, Pavel Begunkov wrote:
>> On 3/11/23 20:45, Pavel Begunkov wrote:
>>> On 3/11/23 17:24, Jens Axboe wrote:
>>>> On 3/10/23 12:04 PM, Pavel Begunkov wrote:
>>>>> io_uring extensively uses task_work, but when a task is waiting
>>>>> for multiple CQEs it causes lots of rescheduling. This series is
>>>>> an attempt to optimise that and to serve as a base for future
>>>>> improvements.
>>>>>
>>>>> For some zc network tests that end up waiting for a portion of
>>>>> the buffers, I've got a 10x decrease in the number of context
>>>>> switches, which cut CPU consumption by more than half (17% -> 8%).
>>>>> It also helps storage cases: running fio/t/io_uring against a
>>>>> low-performance drive, it got a 2x decrease in the number of
>>>>> context switches at QD8 and ~4x at QD32.
>>>>>
>>>>> Not for inclusion yet; I still want to add an optimisation for
>>>>> the case of waiting for a single CQE.
>>>>
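As a rough illustration for anyone following along (a liburing-level
sketch, not code from the series): the pattern in question is a task
parked until a whole batch of CQEs arrives.

	#include <liburing.h>

	/*
	 * Submit whatever is queued and sleep until nr CQEs are ready.
	 * Each completion runs task_work on the waiting task, so without
	 * batching this can cost one context switch per CQE rather than
	 * one per batch of nr.
	 */
	static int wait_for_batch(struct io_uring *ring, unsigned nr)
	{
		struct io_uring_cqe *cqe;
		unsigned head, seen = 0;
		int ret;

		ret = io_uring_submit_and_wait(ring, nr);
		if (ret < 0)
			return ret;

		/* reap everything that's ready in one pass */
		io_uring_for_each_cqe(ring, head, cqe)
			seen++;
		io_uring_cq_advance(ring, seen);
		return seen;
	}
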
>>>> Ran this on the usual peak benchmark, using IRQ. IOPS is around 70M
>>>> for that, and I see context switch rates of around 8.1-8.3M/sec with
>>>> the current kernel.
>>>>
>>>> Applied the two patches, but didn't see much of a change? Performance
>>>> is about the same, and the context switch rate likewise. Confused...
>>>> As you probably know, this test waits for 32 IOs at a time.
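For reference, I take "the usual peak benchmark" to be something along
the lines of fio's t/io_uring in IRQ mode, e.g.:

	t/io_uring -d128 -s32 -c32 -p0 -B1 -F1 /dev/nvme0n1

where -d is the queue depth, -s/-c are the submit/complete batch sizes,
and -p0 selects IRQ rather than polled completions; the exact flags and
the device path here are my guess, not necessarily Jens's invocation.
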
>>>
>>> If I had to guess, it already has perfect batching, in which case
>>> the patch does nothing. Maybe that's due to SSD coalescing +
>>> small read-only I/O + the consistency and small latencies of
>>> Optanes, or the scheduling side of the kernel might just be slow
>>> to react.
>>
>> And if that's the case, I have to note that it's quite a sterile
>> test; the last time I asked, the usual batching we currently get
>> for networking cases is 1-2.
>
> I can definitely see this being very useful for the more
> non-deterministic cases where "completions" come in more sporadically.
> But for the networking case, if this is e.g. receives, you'd trigger the
> wakeup anyway to do the actual receive? And then the cqe posting doesn't
> trigger another wakeup.
True. In my case, zc send notifications were the culprit.

It's not in the series, but it might be better not to eagerly wake
the recv poll tw; that would give time to accumulate more data. I'm
a bit afraid of exhausting recv queues this way, though, so I don't
think it's applicable by default.
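
To illustrate why zc send is so wakeup-heavy (a liburing-level sketch,
not from this series): each zc send posts two CQEs, the immediate send
result with IORING_CQE_F_MORE set and a later notification once the
kernel drops its references to the buffer, so a waiter can see twice
the wakeups of a plain send.

	#include <liburing.h>

	static int send_zc_once(struct io_uring *ring, int sockfd,
				const void *buf, size_t len)
	{
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
		struct io_uring_cqe *cqe;
		unsigned more;
		int sent;

		io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
		io_uring_submit(ring);

		/* first CQE: the send result, with IORING_CQE_F_MORE set */
		if (io_uring_wait_cqe(ring, &cqe))
			return -1;
		sent = cqe->res;
		more = cqe->flags & IORING_CQE_F_MORE;
		io_uring_cqe_seen(ring, cqe);

		if (more) {
			/* second CQE: the notification (IORING_CQE_F_NOTIF);
			 * the buffer is safe to reuse only after this */
			if (io_uring_wait_cqe(ring, &cqe))
				return -1;
			io_uring_cqe_seen(ring, cqe);
		}
		return sent;
	}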
--
Pavel Begunkov