Date:   Wed, 30 May 2018 16:22:02 +0800
From:   Ming Lei <tom.leiming@...il.com>
To:     "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc:     Omar Sandoval <osandov@...ndov.com>, Jens Axboe <axboe@...nel.dk>,
        linux-block <linux-block@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] block: kyber: make kyber more friendly with merging

On Wed, May 23, 2018 at 9:47 AM, jianchao.wang
<jianchao.w.wang@...cle.com> wrote:
> Hi Omar
>
> Thanks for your kindly response.
>
> On 05/23/2018 04:02 AM, Omar Sandoval wrote:
>> On Tue, May 22, 2018 at 10:48:29PM +0800, Jianchao Wang wrote:
>>> Currently, kyber is very unfriendly to merging. kyber depends
>>> on the ctx rq_list to do merging; however, most of the time it will
>>> not leave any requests on the ctx rq_list. This is because even if
>>> the tokens of one domain are used up, kyber will try to dispatch
>>> requests from the other domains and flush the rq_list in the process.
>>
>> That's a great catch, I totally missed this.
>>
>> This approach does end up duplicating a lot of code with the blk-mq core
>> even after Jens' change, so I'm curious if you tried other approaches.
>> One idea I had is to try the bio merge against the kqd->rqs lists. Since
>> that's per-queue, the locking overhead might be too high. Alternatively,
>
> Yes, I made a patch as you suggest, trying the bio merge against kqd->rqs directly.
> The patch looks even simpler. However, because khd->lock is needed every time
> we try a bio merge, there may be high contention overhead on khd->lock when the
> cpu-hctx mapping is not 1:1.
>
>> you could keep the software queues as-is but add our own version of
>> flush_busy_ctxs() that only removes requests of the domain that we want.
>> If one domain gets backed up, that might get messy with long iterations,
>> though.
>
> Yes, I also considered this approach :)
> But the long iterations over every ctx->rq_list look really inefficient.

Right, this list can be quite long if the dispatch tokens are used up.

You might try to introduce a per-domain list into ctx directly; then 'none'
may benefit from this change too, since the bio merge should actually be
done against the per-domain list.


Thanks,
Ming Lei
