Message-Id: <11FFD0AF-4A08-4565-B4BE-FF80EA5BB5E6@linaro.org>
Date:   Wed, 3 Oct 2018 19:22:20 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     'Paolo Valente' via bfq-iosched <bfq-iosched@...glegroups.com>
Cc:     Bart Van Assche <bvanassche@....org>, Jens Axboe <axboe@...nel.dk>,
        Linus Walleij <linus.walleij@...aro.org>,
        linux-block <linux-block@...r.kernel.org>,
        linux-mmc <linux-mmc@...r.kernel.org>,
        linux-mtd@...ts.infradead.org, Pavel Machek <pavel@....cz>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Richard Weinberger <richard@....at>,
        Artem Bityutskiy <dedekind1@...il.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Jan Kara <jack@...e.cz>, Andreas Herrmann <aherrmann@...e.com>,
        Mel Gorman <mgorman@...e.com>,
        Chunyan Zhang <zhang.chunyan@...aro.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Oleksandr Natalenko <oleksandr@...alenko.name>,
        Mark Brown <broonie@...nel.org>
Subject: Re: [PATCH] block: BFQ default for single queue devices



> Il giorno 03 ott 2018, alle ore 18:02, Paolo Valente <paolo.valente@...aro.org> ha scritto:
> 
> 
> 
>> Il giorno 03 ott 2018, alle ore 17:54, Bart Van Assche <bvanassche@....org> ha scritto:
>> 
>> On Wed, 2018-10-03 at 08:29 +0200, Paolo Valente wrote:
>>> [1] https://lkml.org/lkml/2017/2/21/791
>>> [2] http://algo.ing.unimo.it/people/paolo/disk_sched/results.php
>>> [3] https://lwn.net/Articles/763603/
>> 
>> From [2]: "BFQ loses about 18% with only random readers, because the number
>> of IOPS becomes so high that the execution time and parallel efficiency of
>> the schedulers becomes relevant." Since the number of I/O patterns for which
>> results are available on [2] is limited and since the number of devices for
>> which test results are available on [2] is limited (e.g. RAID is missing),
>> there might be other cases in which configuring BFQ as the default would
>> introduce a regression.
>> 
> 
> From [3]: none plus throttling loses 80% of the throughput when used
> to control I/O, on any drive. And this is really only one example among many.
> 

I forgot to add that the same 80% loss happens with mq-deadline plus
throttling, sorry.  In addition, mq-deadline suffers much more than an
18% throughput loss with respect to bfq, in exactly the figure you
cited, if there are random writes too.
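
As a side note, for anyone who wants to redo this kind of comparison on
their own drive: selecting the scheduler for a single device is just a
sysfs write.  Here is a minimal sketch in Python (the device name sdX is
a placeholder and root privileges are needed; this only shows the
standard block-layer interface, not the exact setup behind the figures
in [2]):

#!/usr/bin/env python3
# Minimal sketch: switch the I/O scheduler of one block device via sysfs.
# The device name below (sdX) is a placeholder; run as root.
from pathlib import Path

def current_and_available(dev: str) -> str:
    # The file lists all available schedulers; the active one is shown
    # in brackets, e.g. "mq-deadline [bfq] none".
    return Path(f"/sys/block/{dev}/queue/scheduler").read_text().strip()

def set_scheduler(dev: str, sched: str) -> None:
    # Writing a scheduler name selects it for this device only.
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(sched)

if __name__ == "__main__":
    dev = "sdX"  # placeholder device name
    print("before:", current_and_available(dev))
    set_scheduler(dev, "bfq")
    print("after: ", current_and_available(dev))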

> In addition, the test you mention, designed by me, was meant exactly
> to find and show the worst breaking point of BFQ.  If your main
> workload of interest really consists only of tens of parallel threads
> doing only sync random I/O, and you care only about throughput,
> without any concern for your system becoming so unresponsive as to be
> unusable during the test, then, yes, mq-deadline is a better option
> for you.
> 
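
Just to make the workload described in the quoted paragraph above
concrete, here is a minimal sketch of it in Python: tens of parallel
threads doing nothing but synchronous random reads, with aggregate
throughput as the only metric.  Device path, thread count, block size
and duration are placeholder values, not the parameters of the original
test, and a real benchmark would use direct I/O (e.g. through fio) to
keep the page cache out of the picture.

#!/usr/bin/env python3
# Sketch only: tens of threads doing sync random reads on one device.
import os
import random
import threading
import time

DEV = "/dev/sdX"   # placeholder block device; reading it needs privileges
THREADS = 32       # "tens of parallel threads"
BS = 4096          # 4 KiB random reads
DURATION = 30      # seconds

stop_at = time.monotonic() + DURATION
done_bytes = 0
lock = threading.Lock()

def worker() -> None:
    global done_bytes
    fd = os.open(DEV, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
    local = 0
    while time.monotonic() < stop_at:
        # Pick a random block-aligned offset and issue one sync read.
        # os.pread releases the GIL, so several reads can be outstanding
        # concurrently across the threads.
        off = random.randrange(size // BS) * BS
        local += len(os.pread(fd, BS, off))
    os.close(fd)
    with lock:
        done_bytes += local

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"aggregate throughput: {done_bytes / DURATION / 2**20:.1f} MiB/s")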

Some more detail on this.  The fact that bfq reaches a lower
throughput than none in this test still puzzles me, because the rate
at which bfq processes I/O is one order of magnitude higher than the
IOPS of this device.  So I still don't understand why, with bfq, the
device queue does not get as full as with none, and thus why the
throughput with bfq is not the same as with none.
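
One way to look at this directly while a test runs is to sample the
in-flight counters that the block layer exports per device.  A minimal
sketch (device name and sampling parameters are placeholders):

#!/usr/bin/env python3
# Sketch: sample how many requests are outstanding at the device, to see
# whether its queue actually fills up under a given scheduler.
import time
from pathlib import Path

DEV = "sdX"        # placeholder device name
INTERVAL = 0.5     # seconds between samples
SAMPLES = 20

inflight = Path(f"/sys/block/{DEV}/inflight")

for _ in range(SAMPLES):
    # The file holds two counters: in-flight reads and in-flight writes.
    reads, writes = (int(x) for x in inflight.read_text().split())
    print(f"in-flight reads: {reads:4d}  writes: {writes:4d}")
    time.sleep(INTERVAL)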

To test this issue further, I replaced sync I/O with async I/O (with a
very high depth).  And, inexplicably (to me), throughput dropped with
both bfq and none!  I was already planning to report this issue after
investigating it more.  Anyway, that is a different story from the
subject of this thread.

Thanks,
Paolo


> So, are you really sure the balance is in favor of mq-deadline?
> 
> Thanks,
> Paolo
> 
>> I agree with Jens that it's best to leave it to the Linux distributors to
>> select a default I/O scheduler.
>> 
>> Bart.
> 
