Date:   Mon, 19 Dec 2016 08:33:01 -0700
From:   Jens Axboe <axboe@...com>
To:     Paolo Valente <paolo.valente@...aro.org>
CC:     <linux-block@...r.kernel.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>,
        Omar Sandoval <osandov@...com>,
        Linus Walleij <linus.walleij@...aro.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Mark Brown <broonie@...nel.org>
Subject: Re: [PATCHSET v4] blk-mq-scheduling framework

On 12/19/2016 08:20 AM, Jens Axboe wrote:
> On 12/19/2016 04:32 AM, Paolo Valente wrote:
>>
>>> On 17 Dec 2016, at 01:12, Jens Axboe <axboe@...com> wrote:
>>>
>>> This is version 4 of this patchset, version 3 was posted here:
>>>
>>> https://marc.info/?l=linux-block&m=148178513407631&w=2
>>>
>>> From the discussion last time, I looked into the feasibility of having
>>> two sets of tags for the same request pool, to avoid having to copy
>>> some of the request fields at dispatch and completion time. To do that,
>>> we'd have to replace the driver tag map(s) with our own, and augment
>>> that with tag map(s) on the side representing the device queue depth.
>>> Queuing IO with the scheduler would allocate from the new map, and
>>> dispatching would acquire the "real" tag. We would need to change
>>> drivers to do this, or add an extra indirection table to map a real
>>> tag to the scheduler tag. We would also need a 1:1 mapping between
>>> scheduler and hardware tag pools, or additional info to track it.
>>> Unless someone can convince me otherwise, I think the current approach
>>> is cleaner.
>>>
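To make that concrete, here is a rough sketch of the two-map scheme in
plain C (illustrative names and data structures, not the actual blk-mq
tag code):

#include <stdbool.h>

#define SCHED_DEPTH  256        /* scheduler-side pool, deeper than the device */
#define DRIVER_DEPTH  32        /* device queue depth */

struct two_level_tags {
        bool sched_busy[SCHED_DEPTH];
        bool driver_busy[DRIVER_DEPTH];
        int  sched_of_driver[DRIVER_DEPTH];     /* real tag -> scheduler tag */
};

/* Queue time: IO queued with the scheduler allocates a scheduler tag. */
static int get_sched_tag(struct two_level_tags *t)
{
        for (int i = 0; i < SCHED_DEPTH; i++) {
                if (!t->sched_busy[i]) {
                        t->sched_busy[i] = true;
                        return i;
                }
        }
        return -1;      /* scheduler pool exhausted */
}

/* Dispatch time: acquire the "real" driver tag and record which
 * scheduler tag it stands for, so completion can map back. */
static int get_driver_tag(struct two_level_tags *t, int sched_tag)
{
        for (int i = 0; i < DRIVER_DEPTH; i++) {
                if (!t->driver_busy[i]) {
                        t->driver_busy[i] = true;
                        t->sched_of_driver[i] = sched_tag;
                        return i;
                }
        }
        return -1;      /* device full; request stays with the scheduler */
}
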
>>> I wasn't going to post v4 so soon, but I discovered a bug that led
>>> to drastically decreased merging.  Especially on rotating storage,
>>> this release should be fast, with merging on par with what we get
>>> from the legacy schedulers.
>>>
>>
>> I'm about to start modifying bfq.  You mentioned other missing pieces
>> to come.  Do you already have an idea of what they are, so that I'm
>> somewhat prepared for what won't work, even if my changes are right?
> 
> I'm mostly talking about elevator ops hooks that aren't there in the new
> framework, but exist in the old one. There should be no hidden
> surprises, if that's what you are worried about.
> 
> On the ops side, the only ones I can think of are activate and
> deactivate: activate can be done in the dispatch_request hook, and
> deactivate in put/requeue.
> 
> Outside of that, some of them have been renamed, some have been
> collapsed (like activate/deactivate), and yet others work a little
> differently (like merging). See the mq-deadline conversion, and just
> work through them one at a time.
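
For the activate/deactivate part, the shape is roughly this (hook names
as in the mq-deadline conversion; the bfq_* helpers are placeholders
and the signatures are approximate):

static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
{
        /* pick the next request per the scheduler's policy (placeholder) */
        struct request *rq = bfq_select_next(hctx);

        if (rq)
                bfq_activate(rq);       /* old ->elevator_activate_req_fn work */
        return rq;
}

static void bfq_put_request(struct request *rq)
{
        bfq_deactivate(rq);     /* old ->elevator_deactivate_req_fn work */
}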

Some more details...

Outside of the differences outlined above, a major one is that the old
scheduler interface invoked almost all of the hooks with the device
queue lock held. That's no longer the case in the new framework; you
have to set up your own lock(s) for what you need. That's a lot saner.
One example is the attempt to merge a bio into an existing request,
which is the ->bio_merge() hook. If you look at mq-deadline, the hook
merely grabs its per-queue lock (dd->lock) and calls a blk-mq-sched
helper to do the merging. That helper, in turn, calls
->request_merge(), so ->request_merge() runs under the lock that
->bio_merge() took.
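
Schematically, the mq-deadline hook looks like this (trimmed; the exact
helper signature may differ in the posted series):

static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
{
        struct request_queue *q = hctx->queue;
        struct deadline_data *dd = q->elevator->elevator_data;
        bool ret;

        spin_lock(&dd->lock);
        /* The helper does the front/back merge lookup and ends up
         * calling ->request_merge(), so that runs under dd->lock too. */
        ret = blk_mq_sched_try_merge(q, bio);
        spin_unlock(&dd->lock);

        return ret;
}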

-- 
Jens Axboe

