Message-ID: <439e8a13-cb14-c955-ae98-30ed5490739b@gmail.com>
Date: Thu, 6 Oct 2016 07:57:37 -0400
From: "Austin S. Hemmelgarn" <ahferroin7@...il.com>
To: Mark Brown <broonie@...nel.org>,
Linus Walleij <linus.walleij@...aro.org>
Cc: Tejun Heo <tj@...nel.org>,
Paolo Valente <paolo.valente@...more.it>,
Shaohua Li <shli@...com>, Vivek Goyal <vgoyal@...hat.com>,
linux-block@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...com>, Kernel-team@...com,
jmoyer@...hat.com, Ulf Hansson <ulf.hansson@...aro.org>,
Hannes Reinecke <hare@...e.com>
Subject: Re: [PATCH V3 00/11] block-throttle: add .high limit
On 2016-10-06 07:03, Mark Brown wrote:
> On Thu, Oct 06, 2016 at 10:04:41AM +0200, Linus Walleij wrote:
>> On Tue, Oct 4, 2016 at 9:14 PM, Tejun Heo <tj@...nel.org> wrote:
>
>>> I get that bfq can be a good compromise on most desktop workloads and
>>> behave reasonably well for some server workloads with the slice
>>> expiration mechanism but it really isn't an IO resource partitioning
>>> mechanism.
>
>> Not just desktops, also Android phones.
>
>> So why not have BFQ as a separate scheduling policy upstream,
>> alongside CFQ, deadline and noop?
>
> Right.
>
>> We're already doing the per-usecase Kconfig thing for preemption.
>> But maybe somebody already hates that and want to get rid of it,
>> I don't know.
>
> Hannes also suggested earlier going back to making BFQ a separate
> scheduler rather than a replacement for CFQ, pointing out that this
> mitigates the risks of changing CFQ substantially at this point
> (which seems to be the biggest issue here).
>
ISTR that the original argument for this approach essentially amounted
to: 'If it's so much better, why do we need both?'. Such an argument
holds only if the new design is better in all respects (and there isn't
sufficient information to decide that in this case), or if its drawbacks
are outweighed by the improvements (which is too workload-specific to
decide for something like this).
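
For context, the per-usecase Kconfig choice Linus mentions already has a
direct analogue for I/O schedulers: block/Kconfig.iosched builds noop,
deadline and CFQ and lets you pick a system-wide default. A rough sketch
of what offering BFQ alongside them might look like is below; the BFQ
entries are purely illustrative and not taken from the actual patch set:

    # Hypothetical addition (illustrative only): BFQ offered as one more
    # elevator next to noop, deadline and CFQ, mirroring how the
    # preemption model is chosen via Kconfig.
    config IOSCHED_BFQ
    	tristate "BFQ I/O scheduler"
    	default n
    	help
    	  Proportional-share I/O scheduler; a possible alternative to
    	  CFQ for desktop and mobile workloads.

    choice
    	prompt "Default I/O scheduler"
    	default DEFAULT_CFQ
    	help
    	  Select the I/O scheduler which will be used by default for
    	  all block devices.

    	config DEFAULT_DEADLINE
    		bool "Deadline" if IOSCHED_DEADLINE=y

    	config DEFAULT_CFQ
    		bool "CFQ" if IOSCHED_CFQ=y

    	# Hypothetical: selectable as the default only when BFQ is
    	# built in.
    	config DEFAULT_BFQ
    		bool "BFQ" if IOSCHED_BFQ=y

    	config DEFAULT_NOOP
    		bool "No-op"

    endchoice

With something along those lines, distributions and users could keep CFQ
as the default and opt into BFQ per use case, which is exactly the kind
of workload-specific decision argued for above.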