Message-Id: <9E95BE27-2167-430F-9C7F-6D4A0E255FF3@linaro.org>
Date: Wed, 22 May 2019 11:12:55 +0200
From: Paolo Valente <paolo.valente@...aro.org>
To: "Srivatsa S. Bhat" <srivatsa@...il.mit.edu>
Cc: linux-fsdevel@...r.kernel.org,
linux-block <linux-block@...r.kernel.org>,
linux-ext4@...r.kernel.org, cgroups@...r.kernel.org,
kernel list <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Jan Kara <jack@...e.cz>,
jmoyer@...hat.com, Theodore Ts'o <tytso@....edu>,
amakhalov@...are.com, anishs@...are.com, srivatsab@...are.com
Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup
controller
> Il giorno 22 mag 2019, alle ore 11:02, Srivatsa S. Bhat <srivatsa@...il.mit.edu> ha scritto:
>
> On 5/22/19 1:05 AM, Paolo Valente wrote:
>>
>>
>>> Il giorno 22 mag 2019, alle ore 00:51, Srivatsa S. Bhat <srivatsa@...il.mit.edu> ha scritto:
>>>
>>> [ Resending this mail with a dropbox link to the traces (instead
>>> of a file attachment), since it didn't go through the last time. ]
>>>
>>> On 5/21/19 10:38 AM, Paolo Valente wrote:
>>>>
>>>>> So, instead of only sending me a trace, could you please:
>>>>> 1) apply this new patch on top of the one I attached in my previous email
>>>>> 2) repeat your test and report results
>>>>
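(In case it helps anyone reproducing this: a minimal sketch of applying the two
patches to a kernel source tree. The file names below are hypothetical, since the
actual patches were sent as attachments earlier in the thread.)

  cd linux                                  # kernel source tree under test
  # hypothetical names for the two attached patches
  git apply bfq-add-logs-and-BUG_ONs.patch
  git apply bfq-boost-injection.patch
  # alternatively, without git: patch -p1 < <patch-file>
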
>>>> One last thing (I swear!): as you can see from my script, I have only
>>>> tested the case low_latency=0 so far. So please, for the moment, do your
>>>> test with low_latency=0. You can find the full path to this parameter in,
>>>> e.g., my script.
>>>>
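For reference, low_latency is exposed per device through sysfs while BFQ is the
active scheduler; a minimal sketch (sdX is a placeholder for the actual device,
and writing to sysfs needs root):

  cat /sys/block/sdX/queue/scheduler             # the scheduler in brackets is the active one
  echo 0 > /sys/block/sdX/queue/iosched/low_latency
  cat /sys/block/sdX/queue/iosched/low_latency   # should now print 0
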
>>> No problem! :) Thank you for sharing patches for me to test!
>>>
>>> I have good news :) Your patch improves the throughput significantly
>>> when low_latency = 0.
>>>
>>> Without any patch:
>>>
>>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>> 10000+0 records in
>>> 10000+0 records out
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 58.0915 s, 88.1 kB/s
>>>
>>>
>>> With both patches applied:
>>>
>>> dd if=/dev/zero of=/root/test0.img bs=512 count=10000 oflag=dsync
>>> 10000+0 records in
>>> 10000+0 records out
>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.87487 s, 1.3 MB/s
>>>
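(Quick sanity check on those figures: 5,120,000 bytes over 58.09 s is about
88 kB/s, and 5,120,000 bytes over 3.87 s is about 1.3 MB/s, i.e. roughly a
15x throughput improvement.)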
>>> The performance is still not as good as mq-deadline (which achieves
>>> 1.6 MB/s), but this is a huge improvement for BFQ nonetheless!
>>>
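For the mq-deadline comparison, the active scheduler can be inspected and
switched per device via sysfs; a small sketch, again with sdX as a placeholder
for the actual device:

  cat /sys/block/sdX/queue/scheduler            # e.g. "[bfq] mq-deadline none"
  echo mq-deadline > /sys/block/sdX/queue/scheduler
  echo bfq > /sys/block/sdX/queue/scheduler
  # keep in mind that parameters under iosched/ (such as low_latency)
  # go back to their defaults when the scheduler changes
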
>>> A tarball with the trace output from the two scenarios you requested,
>>> one with only the debug patch applied (trace-bfq-add-logs-and-BUG_ONs)
>>> and the other with both patches applied (trace-bfq-boost-injection), is
>>> available here:
>>>
>>> https://www.dropbox.com/s/pdf07vi7afido7e/bfq-traces.tar.gz?dl=0
>>>
>>
>> Hi Srivatsa,
>> I've seen the bugzilla entry you created. I'm a little unsure about how
>> best to proceed. Shall we move this discussion to the bugzilla, or
>> should we continue it here, where it started, and then update the
>> bugzilla?
>>
>
> Let's continue here on LKML itself.
Just done :)
> The only reason I created the
> bugzilla entry was to attach the tarball of the traces, assuming
> it would let me upload a 20 MB file (since the email attachment
> didn't go through). But bugzilla's file-size limit is much smaller
> than that, so that didn't work either, and I resorted to using
> Dropbox. So we don't need the bugzilla entry anymore; I might as
> well close it to avoid confusion.
>
No no, don't close it: it can reach people who don't use LKML. We
just have to remember to report back there at the end of this. BTW, I
also think the bug is incorrectly filed against 5.1, while all these
tests and results concern 5.2-rcX.
Thanks,
Paolo
> Regards,
> Srivatsa
> VMware Photon OS