lists.openwall.net - Open Source and information security mailing list archives
Date:   Wed, 22 May 2019 10:05:46 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Cc:     linux-fsdevel@...r.kernel.org,
        linux-block <linux-block@...r.kernel.org>,
        linux-ext4@...r.kernel.org, cgroups@...r.kernel.org,
        kernel list <linux-kernel@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Jan Kara <jack@...e.cz>,
        jmoyer@...hat.com, Theodore Ts'o <tytso@....edu>,
        amakhalov@...are.com, anishs@...are.com, srivatsab@...are.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup
 controller



> Il giorno 22 mag 2019, alle ore 00:51, Srivatsa S. Bhat <srivatsa@...il.mit.edu> ha scritto:
> 
> [ Resending this mail with a dropbox link to the traces (instead
> of a file attachment), since it didn't go through the last time. ]
> 
> On 5/21/19 10:38 AM, Paolo Valente wrote:
>> 
>>> So, instead of only sending me a trace, could you please:
>>> 1) apply this new patch on top of the one I attached in my previous email
>>> 2) repeat your test and report results
>> 
>> One last thing (I swear!): as you can see from my script, I have
>> only tested the case low_latency=0 so far.  So please, for the
>> moment, run your test with low_latency=0.  You'll find the full
>> path to this parameter in, e.g., my script.
>> 
> No problem! :) Thank you for sharing patches for me to test!
> 
> I have good news :) Your patch improves the throughput significantly
> when low_latency = 0.
> 
> Without any patch:
> 
> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
> 10000+0 records in
> 10000+0 records out
> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s
> 
> 
> With both patches applied:
> 
> dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
> 10000+0 records in
> 10000+0 records out
> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s
> 
> The performance is still not as good as mq-deadline (which achieves
> 1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
> 
> A tarball with the trace output from the 2 scenarios you requested,
> one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs),
> and another with both patches applied (trace-bfq-boost-injection) is
> available here:
> 
> https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0
> 
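[For readers wanting to reproduce the numbers above, the workload can be sketched as below. This is an assumption-laden sketch, not Paolo's actual script: the device name `sda` and the standard BFQ sysfs paths are assumed here; substitute your own block device. Root privileges are required for the sysfs writes.]

```shell
# Select the BFQ I/O scheduler and disable its low-latency heuristics,
# matching the low_latency=0 case tested in this thread.
# (Paths assume a device named sda -- adjust for your system.)
echo bfq > /sys/block/sda/queue/scheduler
echo 0 > /sys/block/sda/queue/iosched/low_latency

# Run the same synchronous small-write workload as in the report:
# 10000 writes of 512 bytes, each forced to stable storage (dsync).
dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync

# dd's reported rate is just bytes copied divided by elapsed time;
# e.g. the unpatched run above: 5120000 bytes over 58.0915 s.
awk 'BEGIN { printf "%.1f kB/s\n", 5120000 / 58.0915 / 1000 }'
```

The last line is only a sanity check on the arithmetic: 5120000 / 58.0915 / 1000 comes out to the 88.1 kB/s figure dd printed for the unpatched run.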

Hi Srivatsa,
I've seen the bugzilla you've created.  I'm a little unsure how best
to proceed.  Shall we move this discussion to the bugzilla, or
continue it here, where it started, and then update the bugzilla?

Let me know,
Paolo

> Thank you!
> 
> Regards,
> Srivatsa
> VMware Photon OS


