Date:   Sun, 20 Jun 2021 23:04:15 +0100
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Olivier Langlois <olivier@...llion01.com>,
        Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] io_uring: reduce latency by reissueing the operation

On 6/20/21 10:31 PM, Olivier Langlois wrote:
> On Sun, 2021-06-20 at 21:55 +0100, Pavel Begunkov wrote:
[...]
>>> creating a new worker is for sure not free but I would remove that
>>> cause from the suspect list as in my scenario, it was a one-shot
>>> event.
>>
>> Not sure what you mean, but speculating, io-wq may not have an
>> optimal policy for recycling worker threads, leading to
>> recreating/removing more of them than needed. Depends on bugs,
>> use cases and so on.
> 
> Since I don't use the async workers feature at all, I was fixated
> on the fact that I was seeing an io worker created. That is the
> root of why I ended up writing the patch.
> 
> My understanding of how io worker lifetimes are managed is that
> one worker remains present once created.

There should be one(?) persistent worker as such, and the others
should eventually die off if there is nothing to do. That may have
shifted with recent changes; I should update myself on that.

> In my scenario, once that single persistent io worker thread is
> created, no others are ever created. So this is a one shot cost. I was

Good to know, thanks for confirming.


> prepared to discard the first measurement, to be as fair as possible
> and not pollute the async performance results with a one-time-only
> thread creation cost, but to my surprise the thread creation cost was
> not visible in the first measurement...
> 
> From that (maybe this is an erroneous shortcut), I do not believe
> that thread creation is the bottleneck.
>>
>>> First measurement was even not significantly higher than all the
>>> other
>>> measurements.
>>
>> You get a huge max for the io-wq case. Obviously nothing can be
>> said from the max alone. We'd need the latency distribution and
>> probably longer runs, but I'm still curious where it's coming
>> from. Just keeping an eye on it in general.
> 
> Maybe it is scheduling...
> 
> I'll keep this mystery in the back of my mind in case I end up
> with a way to find out where the time is spent...

-- 
Pavel Begunkov
