Message-ID: <CAPDyKFoi64Q0H9x2F35oY85PMDNq4hRsmqGYiP+En+qtQ4+Bag@mail.gmail.com>
Date:   Thu, 27 Oct 2016 20:13:08 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Jens Axboe <axboe@...nel.dk>
Cc:     Paolo Valente <paolo.valente@...aro.org>,
        Christoph Hellwig <hch@...radead.org>,
        Arnd Bergmann <arnd@...db.de>,
        Bart Van Assche <bart.vanassche@...disk.com>,
        Jan Kara <jack@...e.cz>, Tejun Heo <tj@...nel.org>,
        linux-block@...r.kernel.org,
        Linux-Kernal <linux-kernel@...r.kernel.org>,
        Linus Walleij <linus.walleij@...aro.org>,
        Mark Brown <broonie@...nel.org>,
        Hannes Reinecke <hare@...e.de>,
        Grant Likely <grant.likely@...retlab.ca>,
        James Bottomley <James.Bottomley@...senpartnership.com>
Subject: Re: [PATCH 00/14] introduce the BFQ-v0 I/O scheduler as an extra scheduler

On 27 October 2016 at 19:43, Jens Axboe <axboe@...nel.dk> wrote:
> On 10/27/2016 11:32 AM, Ulf Hansson wrote:
>>
>> [...]
>>
>>>
>>> I'm hesitant to add a new scheduler because it's very easy to add, very
>>> difficult to get rid of. If we do add BFQ as a legacy scheduler now,
>>> it'll take us years and years to get rid of it again. We should be
>>> moving towards LESS moving parts in the legacy path, not more.
>>
>>
>> Jens, I think you are wrong here and let me try to elaborate on why.
>>
>> 1)
>> We already have legacy schedulers like CFQ, DEADLINE, etc - and most
>> block device drivers are still using the legacy blk interface.
>
>
> I don't think that's an accurate statement. In terms of coverage, most
> drivers do support blk-mq. Anything SCSI, nvme, virtio-blk, SATA runs on
> (or can run on) top of blk-mq.

Well, I just used "git grep" and found that many drivers don't use
blk-mq. Apologies if I gave the wrong impression.
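As a rough sketch, a check along these lines can be run from the root of a
kernel source tree (an assumption here; the function names are the usual
registration entry points, blk_mq_alloc_tag_set() for blk-mq drivers and
blk_init_queue() for legacy-path drivers, so counting files that mention them
gives only a crude estimate of coverage, not an exact driver count):

```shell
# Files under drivers/ that set up a blk-mq tag set:
grep -rl 'blk_mq_alloc_tag_set' drivers/ | wc -l

# Files under drivers/ that still create a legacy request queue:
grep -rl 'blk_init_queue' drivers/ | wc -l
```

Stacking drivers and drivers that register via helpers won't show up in such
a grep, which is part of why raw counts are easy to argue about.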

>
>> To be able to remove the legacy blk layer, all block device drivers
>> must be converted to blkmq - of course.
>
>
> That's a given.
>
>> So to reach that goal, we will not only need to evolve blkmq to allow
>> scheduling (at least for single queue devices), but we also need to
>> convert *all* block device drivers to blkmq. For sure this will take
>> *years* and not months.
>
>
> Correct.
>
>> More important, when the transition to blkmq has been completed, then
>> there is absolutely no difference (from effort point of view) in
>> removing the legacy blk layer - no matter if we have BFQ in there or
>> not.
>>
>> I do understand if you have concern from maintenance point of view, as
>> I assume you would rather focus on evolving blkmq, than care about
>> legacy blk code. So, would it help if Paolo volunteers to maintain the
>> BFQ code in the meantime?
>
>
> We're obviously still maintaining the legacy IO path. But we don't want
> to actively develop it, and we haven't, for a long time.
>
> And Paolo maintaining it is a strict requirement for inclusion, legacy
> or blk-mq aside. That would go for both. I'd never accept a major
> feature from an individual or company if they weren't willing and
> capable of maintaining it. Throwing submissions over the wall is not
> viable.

That seems very reasonable!

>
>> 2)
>> While we work on evolving blkmq and converting block device drivers to
>> it, BFQ could, as a separate legacy scheduler, help *lots* of Linux
>> users get a significantly improved experience. Should we really
>> prevent them from that? I think you block maintainer guys really need
>> to consider this fact.
>
>
> You still seem to be basing that assumption on the notion that we have
> to convert tons of drivers for BFQ to make sense under the blk-mq
> umbrella. That's not the case.

Well, let's not argue about how many. It's pretty easy to check that.

Instead, what I can tell you is that we have been looking into
converting mmc (which I maintain), and that is indeed a significant
amount of work. We will need to rip out all of the mmc request
management, and most likely we also need to extend the blkmq interface,
to be able to re-implement all the current request optimizations. We
are looking into this, but it just takes time.

I can imagine that it's not always a straightforward "convert to
blk-mq" patch for every block device driver.

>
>> 3)
>> While we work on scheduling in blkmq (at least for single queue
>> devices), it's of course important that we set high goals. Having BFQ
>> (and the other schedulers) in the legacy blk, provides a good
>> reference for what we could aim for.
>
>
> Sure, but you don't need BFQ to be included in the kernel for that.

Perhaps not.

But does that mean you expect Paolo to maintain an up-to-date BFQ tree for you?

>
>>> We can keep having this discussion every few years, but I think we'd
>>> both prefer to make some actual progress here. It's perfectly fine to
>>> add an interface for a single queue interface for an IO scheduler for
>>> blk-mq, since we don't care too much about scalability there. And that
>>> won't take years, that should be a few weeks. Retrofitting BFQ on top of
>>> that should not be hard either. That can co-exist with a real multiqueue
>>> scheduler as well, something that's geared towards some fairness for
>>> faster devices.
>>
>>
>> That's really great news!
>>
>> I hope we get a possibility to meet and discuss the plans for this at
>> Kernel summit/Linux Plumbers the next week!
>
>
> I'll be there.

Great!

Kind regards
Ulf Hansson
