Message-ID: <7d533b7f-17e3-96e4-4fcb-f9bc4ce5e0ed@gmail.com>
Date:   Thu, 16 Mar 2023 12:25:52 +0000
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [RFC 0/2] optimise local-tw task rescheduling

On 3/11/23 17:24, Jens Axboe wrote:
> On 3/10/23 12:04 PM, Pavel Begunkov wrote:
>> io_uring extensively uses task_work, but when a task is waiting
>> for multiple CQEs it causes lots of rescheduling. This series
>> is an attempt to optimise it and be a base for future improvements.
>>
>> For some zc network tests that eventually wait for a portion of
>> buffers, I've got a 10x decrease in the number of context switches,
>> which cut CPU consumption by more than half (17% -> 8%).
>> It also helps storage cases: running fio's t/io_uring against
>> a low-performance drive gave a 2x decrease in the number of context
>> switches for QD8 and ~4x for QD32.
>>
>> Not for inclusion yet; I want to add an optimisation for the case
>> of waiting for a single CQE.
> 
> Ran this on the usual peak benchmark, using IRQ. IOPS is around 70M
> for that, and I see context switch rates of around 8.1-8.3M/sec with
> the current kernel.
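
For context, the waiting pattern being optimised is a task blocking
until a batch of CQEs arrives. A minimal liburing sketch (the nop
requests and queue size are placeholders standing in for a real
workload):

#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	unsigned head, i, seen = 0;

	/* local task_work needs single issuer + deferred task running */
	if (io_uring_queue_init(32, &ring, IORING_SETUP_SINGLE_ISSUER |
				IORING_SETUP_DEFER_TASKRUN) < 0)
		return 1;

	for (i = 0; i < 8; i++)
		io_uring_prep_nop(io_uring_get_sqe(&ring));

	/* block until at least 8 CQEs are posted -- the
	 * io_cqring_wait(min_events=8) path discussed below */
	if (io_uring_submit_and_wait(&ring, 8) < 0)
		return 1;

	io_uring_for_each_cqe(&ring, head, cqe)
		seen++;
	io_uring_cq_advance(&ring, seen);
	io_uring_queue_exit(&ring);
	return 0;
}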

Tried it out. No difference with bs=512: QD=4 completes before it
even gets to schedule() in io_cqring_wait(). With QD=32 the local
task_work run, __io_run_local_work(), spins for 2 loops, and QD=8
sits somewhere in the middle, with a rare extra schedule.
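
(These are fio t/io_uring runs; an invocation along the lines of

t/io_uring -d8 -b4096 -p0 /dev/nvme0n1

where -d is the queue depth, -b the block size and -p0 selects IRQ
rather than polled completions -- the device path and any remaining
flags are whatever matches the setup.)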

For bs=4096 QD=8 I see a lot of:

io_cqring_wait() @min_events=8
schedule()
__io_run_local_work() nr=4
schedule()
__io_run_local_work() nr=4


That is, a wait for min_events=8 gets satisfied in two wakeups of 4
completions each. And benchmarking without and with the patch shows
a nice CPU utilisation reduction:

CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
   0    1.18    0.00   19.24    0.00    0.00    0.00    0.00    0.00    0.00   79.57
   0    1.63    0.00   29.38    0.00    0.00    0.00    0.00    0.00    0.00   68.98
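
(The columns above are mpstat output; something like

mpstat -P 0 1

from sysstat, sampling CPU 0 once a second, produces the same layout.)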

-- 
Pavel Begunkov
