Date:   Tue, 21 May 2019 15:51:46 -0700
From:   "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
To:     Paolo Valente <paolo.valente@...aro.org>
Cc:     linux-fsdevel@...r.kernel.org,
        linux-block <linux-block@...r.kernel.org>,
        linux-ext4@...r.kernel.org, cgroups@...r.kernel.org,
        kernel list <linux-kernel@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Jan Kara <jack@...e.cz>,
        jmoyer@...hat.com, Theodore Ts'o <tytso@....edu>,
        amakhalov@...are.com, anishs@...are.com, srivatsab@...are.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup
 controller

[ Resending this mail with a Dropbox link to the traces (instead
of a file attachment), since it didn't go through the last time. ]

On 5/21/19 10:38 AM, Paolo Valente wrote:
> 
>> So, instead of only sending me a trace, could you please:
>> 1) apply this new patch on top of the one I attached in my previous email
>> 2) repeat your test and report results
> 
> One last thing (I swear!): as you can see from my script, I tested the
> case low_latency=0 so far.  So please, for the moment, do your test
> with low_latency=0.  You find the whole path to this parameter in,
> e.g., my script.
> 
No problem! :) Thank you for sharing patches for me to test!

I have good news :) Your patch improves the throughput significantly
when low_latency = 0.
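For anyone reproducing this, a minimal sketch of disabling the
low-latency heuristics via the scheduler's sysfs tunable. The device
name /dev/sdb here is an assumption; substitute your own test disk:

```shell
# Switch the queue to bfq and set low_latency=0.
# /dev/sdb is a hypothetical device name for illustration.
echo bfq > /sys/block/sdb/queue/scheduler
echo 0 > /sys/block/sdb/queue/iosched/low_latency

# Confirm the setting took effect.
cat /sys/block/sdb/queue/iosched/low_latency
```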

Without any patch:

dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s


With both patches applied:

dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s

The performance is still not as good as mq-deadline (which achieves
1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
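As a quick sanity check, dd's reported rates are simply bytes divided
by elapsed seconds, in decimal units: 5,120,000 bytes over 58.0915 s
is 88.1 kB/s, and over 3.87487 s it is about 1.3 MB/s, roughly a 15x
improvement. The arithmetic, sketched with awk:

```shell
# Recompute dd's throughput figures from bytes and seconds.
# 512 bytes * 10000 writes = 5,120,000 bytes per run; dd uses
# decimal units (1 kB = 1000 bytes).
awk 'BEGIN {
    printf "unpatched: %.1f kB/s\n", 5120000 / 58.0915 / 1000
    printf "patched:   %.1f kB/s\n", 5120000 / 3.87487 / 1000
}'
```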

A tarball with the trace output from the two scenarios you requested,
one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs)
and another with both patches applied (trace-bfq-boost-injection), is
available here:

https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0

Thank you!
 
Regards,
Srivatsa
VMware Photon OS
