Message-ID: <f4b11315-144c-c67d-5143-50b5be950ede@csail.mit.edu>
Date:   Wed, 22 May 2019 02:02:33 -0700
From:   "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
To:     Paolo Valente <paolo.valente@...aro.org>
Cc:     linux-fsdevel@...r.kernel.org,
        linux-block <linux-block@...r.kernel.org>,
        linux-ext4@...r.kernel.org, cgroups@...r.kernel.org,
        kernel list <linux-kernel@...r.kernel.org>,
        Jens Axboe <axboe@...nel.dk>, Jan Kara <jack@...e.cz>,
        jmoyer@...hat.com, Theodore Ts'o <tytso@....edu>,
        amakhalov@...are.com, anishs@...are.com, srivatsab@...are.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup
 controller

On 5/22/19 1:05 AM, Paolo Valente wrote:
> 
> 
>> Il giorno 22 mag 2019, alle ore 00:51, Srivatsa S. Bhat <srivatsa@...il.mit.edu> ha scritto:
>>
>> [ Resending this mail with a Dropbox link to the traces (instead
>> of a file attachment), since it didn't go through the last time. ]
>>
>> On 5/21/19 10:38 AM, Paolo Valente wrote:
>>>
>>>> So, instead of only sending me a trace, could you please:
>>>> 1) apply this new patch on top of the one I attached in my previous email
>>>> 2) repeat your test and report results
>>>
>>> One last thing (I swear!): as you can see from my script, I have tested
>>> only the case low_latency=0 so far.  So please, for the moment, do your
>>> test with low_latency=0.  You can find the full path to this parameter
>>> in, e.g., my script.
>>>
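(For reference, a minimal sketch of how this tunable can be toggled via
sysfs, assuming BFQ is the active scheduler and using a purely
illustrative device name sdX:

  cat /sys/block/sdX/queue/scheduler            # confirm bfq is selected
  echo 0 > /sys/block/sdX/queue/iosched/low_latency
  cat /sys/block/sdX/queue/iosched/low_latency  # should now print 0
)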
>> No problem! :) Thank you for sharing patches for me to test!
>>
>> I have good news :) Your patch improves the throughput significantly
>> when low_latency = 0.
>>
>> Without any patch:
>>
>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s
>>
>>
>> With both patches applied:
>>
>> dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s
>>
>> The performance is still not as good as mq-deadline (which achieves
>> 1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
>>
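(For comparison, a minimal sketch of how the two schedulers can be
switched before re-running the same dd test, again with a purely
illustrative device name sdX:

  echo mq-deadline > /sys/block/sdX/queue/scheduler
  dd if=/dev/zero of=/root/test-deadline.img bs=512 count=10000 oflag=dsync

  echo bfq > /sys/block/sdX/queue/scheduler
  dd if=/dev/zero of=/root/test-bfq.img bs=512 count=10000 oflag=dsync
)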
>> A tarball with the trace output from the two scenarios you requested,
>> one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs)
>> and another with both patches applied (trace-bfq-boost-injection), is
>> available here:
>>
>> https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0
>>
> 
> Hi Srivatsa,
> I've seen the bugzilla you've created.  I'm a little confused on how
> to better proceed.  Shall we move this discussion to the bugzilla, or
> should we continue this discussion here, where it has started, and
> then update the bugzilla?
> 

Let's continue here on LKML itself. The only reason I created the
bugzilla entry was to attach the tarball of traces, on the assumption
that it would let me upload a 20 MB file (since the email attachment
didn't go through). But bugzilla's file-size limit is much smaller than
that, so that didn't work out either, and I resorted to using Dropbox.
So we don't need the bugzilla entry anymore; I might as well close it
to avoid confusion.

Regards,
Srivatsa
VMware Photon OS
