Message-Id: <C7FEB750-D993-44EC-B3CB-0639DF68CD26@linaro.org>
Date: Wed, 3 Oct 2018 18:02:36 +0200
From: Paolo Valente <paolo.valente@...aro.org>
To: Bart Van Assche <bvanassche@....org>
Cc: Jens Axboe <axboe@...nel.dk>,
Linus Walleij <linus.walleij@...aro.org>,
linux-block <linux-block@...r.kernel.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
linux-mtd@...ts.infradead.org, Pavel Machek <pavel@....cz>,
Ulf Hansson <ulf.hansson@...aro.org>,
Richard Weinberger <richard@....at>,
Artem Bityutskiy <dedekind1@...il.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Jan Kara <jack@...e.cz>, Andreas Herrmann <aherrmann@...e.com>,
Mel Gorman <mgorman@...e.com>,
Chunyan Zhang <zhang.chunyan@...aro.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
'Paolo Valente' via bfq-iosched
<bfq-iosched@...glegroups.com>,
Oleksandr Natalenko <oleksandr@...alenko.name>,
Mark Brown <broonie@...nel.org>
Subject: Re: [PATCH] block: BFQ default for single queue devices
> On 3 Oct 2018, at 17:54, Bart Van Assche <bvanassche@....org> wrote:
>
> On Wed, 2018-10-03 at 08:29 +0200, Paolo Valente wrote:
>> [1] https://lkml.org/lkml/2017/2/21/791
>> [2] http://algo.ing.unimo.it/people/paolo/disk_sched/results.php
>> [3] https://lwn.net/Articles/763603/
>
> From [2]: "BFQ loses about 18% with only random readers, because the number
> of IOPS becomes so high that the execution time and parallel efficiency of
> the schedulers becomes relevant." Since the number of I/O patterns for which
> results are available on [2] is limited and since the number of devices for
> which test results are available on [2] is limited (e.g. RAID is missing),
> there might be other cases in which configuring BFQ as the default would
> introduce a regression.
>
From [3]: the none scheduler, combined with throttling, loses 80% of the
throughput when used to control I/O. On any drive. And this is really only one example among a ton.
In addition, the test you mention, designed by me, was meant exactly
to find and show the worst breaking point of BFQ. If your main
workload of interest really consists only of tens of parallel threads
doing only sync random I/O, and you care only about throughput,
with no concern for your system becoming so unresponsive as to be
unusable during the test, then, yes, mq-deadline is a better option
for you.
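(As a side note, a minimal sketch of what that choice looks like in
practice, assuming a device named sda just for illustration: the
scheduler can be switched per device at run time via sysfs, so the
decision need not be system-wide; the exact list of schedulers shown
depends on what is built into the kernel:

  $ cat /sys/block/sda/queue/scheduler
  mq-deadline kyber bfq [none]
  $ echo mq-deadline > /sys/block/sda/queue/scheduler
  $ cat /sys/block/sda/queue/scheduler
  [mq-deadline] kyber bfq none
)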
So, are you really sure the balance is in favor of mq-deadline?
Thanks,
Paolo
> I agree with Jens that it's best to leave it to the Linux distributors to
> select a default I/O scheduler.
>
> Bart.