Message-Id: <686D6469-9DE7-4738-B92A-002144C3E63E@linaro.org>
Date: Wed, 22 May 2019 11:09:26 +0200
From: Paolo Valente <paolo.valente@...aro.org>
To: "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Cc: linux-fsdevel@...r.kernel.org,
linux-block <linux-block@...r.kernel.org>,
linux-ext4@...r.kernel.org, cgroups@...r.kernel.org,
kernel list <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Jan Kara <jack@...e.cz>,
jmoyer@...hat.com, Theodore Ts'o <tytso@....edu>,
amakhalov@...are.com, anishs@...are.com, srivatsab@...are.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller

> On 22 May 2019, at 10:05, Paolo Valente <paolo.valente@...aro.org> wrote:
>
>
>
>> On 22 May 2019, at 00:51, Srivatsa S. Bhat <srivatsa@...il.mit.edu> wrote:
>>
>> [ Resending this mail with a dropbox link to the traces (instead
>> of a file attachment), since it didn't go through the last time. ]
>>
>> On 5/21/19 10:38 AM, Paolo Valente wrote:
>>>
>>>> So, instead of only sending me a trace, could you please:
>>>> 1) apply this new patch on top of the one I attached in my previous email
>>>> 2) repeat your test and report results
>>>
>>> One last thing (I swear!): as you can see from my script, I have
>>> tested only the low_latency=0 case so far. So please, for the moment,
>>> do your test with low_latency=0. You can find the full path to this
>>> parameter in, e.g., my script.
>>>
>> No problem! :) Thank you for sharing patches for me to test!
>>
>> I have good news :) Your patch improves the throughput significantly
>> when low_latency = 0.
>>
>> Without any patch:
>>
>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s
>>
>>
>> With both patches applied:
>>
>> dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
>> 10000+0 records in
>> 10000+0 records out
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s
>>
>> The performance is still not as good as mq-deadline (which achieves
>> 1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
>>
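
(As a side note, for anyone reproducing the comparison with
mq-deadline: the active scheduler can be switched per device at run
time; sda below is only an example device name, to be replaced with
the disk under test:

  cat /sys/block/sda/queue/scheduler
  echo mq-deadline > /sys/block/sda/queue/scheduler

and then the same dd command can be repeated.)
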
>> A tarball with the trace output from the 2 scenarios you requested,
>> one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs),
>> and another with both patches applied (trace-bfq-boost-injection) is
>> available here:
>>
>> https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0
>>
>
> Hi Srivatsa,
> I've seen the bugzilla entry you've created. I'm a little unsure how
> best to proceed. Shall we move this discussion to the bugzilla, or
> should we continue it here, where it started, and then update the
> bugzilla afterwards?
>
OK, I've received some feedback on this point, so I'll continue the
discussion here and then report back on the bugzilla.

First, thank you very much for testing my patches and, above all, for
sharing those huge traces!

According to your traces, the residual ~20% throughput loss that you
record is due to the fact that the BFQ injection mechanism takes a few
hundredths of a second to stabilize at the beginning of the workload.
During that start-up time, the throughput is stuck at the dreadful
~60-90 KB/s that you see without this new patch. After that time, there
seems to be no loss according to the trace.

The problem is that a loss lasting only a few hundredths of a second is
still not negligible for a write workload that lasts only 3-4 seconds.
Could you please try writing a larger file?
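
Just as a sketch (the file name and count below are only placeholders,
not values from your setup), keeping bs=512 and oflag=dsync while
increasing the count by an order of magnitude should make the start-up
transient a much smaller fraction of the run:

  dd if=/dev/zero of=/root/test-large.img bs=512 count=100000 oflag=dsync

That writes about 51 MB of synchronous 512-byte requests instead of
about 5 MB, so a loss confined to the first fraction of a second should
be amortized away.
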
In addition, I wanted to ask you whether you measured BFQ throughput
with traces disabled. This may make a difference.
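
If you are collecting the logs through the standard ftrace interface,
one quick way to get a control run is to turn tracing off before
repeating the dd test, e.g.:

  echo 0 > /sys/kernel/debug/tracing/tracing_on

Otherwise, simply repeating the test on a kernel without the debug
patch applied would answer the same question.
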
After trying with a larger file, you can also try with low_latency on.
On my side, it causes the results to become a little unstable across
repetitions (which is expected).
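
In case it is useful, this is roughly how that knob can be flipped at
run time; sda below is only an example device name, to be replaced with
the disk under test:

  echo 1 > /sys/block/sda/queue/iosched/low_latency

(and back to 0 for the low_latency=0 runs). The parameter is exposed
only while BFQ is the active scheduler for the device.
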
Thanks,
Paolo
> Let me know,
> Paolo
>
>> Thank you!
>>
>> Regards,
>> Srivatsa
>> VMware Photon OS