Message-ID: <20201229011517.GA3355551@T590>
Date:   Tue, 29 Dec 2020 09:15:17 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     "yukuai (C)" <yukuai3@...wei.com>
Cc:     axboe@...nel.dk, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
        zhangxiaoxu5@...wei.com
Subject: Re: [PATCH 1/3] blk-mq: allow hardware queue to get more tag while
 sharing a tag set

On Mon, Dec 28, 2020 at 05:02:50PM +0800, yukuai (C) wrote:
> Hi
> 
> On 2020/12/28 16:28, Ming Lei wrote:
> > Another candidate solution may be to always return true from hctx_may_queue()
> > for this kind of queue, because queue_depth already provides fair allocation for
> > each LUN, so it looks unnecessary to do that again.
> 
> If hctx_may_queue() always returns true in this case, then, for example, if
> we set queue_depth to 128 (or, if we can't, the bigger the better) for all
> disks and test with numjobs=64, the result will be one disk with high
> iops and the rest very low. So I think it's better to bound the maximum
> number of tags a disk can get in this case.
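
To make the idea quoted above concrete, here is a minimal sketch of what
"always return true for this kind of queue" could look like inside
hctx_may_queue(). The nr_requests comparison and the field/helper names are
my assumptions based on the ~5.10 block/blk-mq.h code, written from memory,
not a tested patch:

/*
 * Sketch only: skip the fair-share check when the queue's own depth
 * (nr_requests, i.e. the sysfs queue_depth) is already smaller than the
 * shared tag-set depth, on the theory that the per-LUN queue_depth limit
 * is what provides fair allocation in that case.
 */
static inline bool hctx_may_queue_sketch(struct blk_mq_hw_ctx *hctx,
					 struct sbitmap_queue *bt)
{
	unsigned int users, depth;

	if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
		return true;

	/* hypothetical bypass discussed above */
	if (hctx->queue->nr_requests < bt->sb.depth)
		return true;

	/* otherwise keep the existing fair-share limit */
	users = atomic_read(&hctx->tags->active_queues);
	if (!users)
		return true;

	depth = max_t(unsigned int, bt->sb.depth / users, 4);
	return __blk_mq_active_requests(hctx) < depth;
}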

Just wondering why you would set queue_depth to 128 via sysfs for all disks?
If you do that, you should expect this result, given that the whole tag set's
queue depth is only 128.
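
For reference, with the fair-share check in place each of N active queues is
allowed roughly max(128/N, 4) tags (my reading of the ~5.10 code, from
memory); with the check bypassed and queue_depth=128 on every disk, nothing
below the shared 128-tag limit constrains a single disk. A rough userspace
sketch of the with-check arithmetic:

/* Demo of the per-queue fair-share cap with a shared tag set.
 * Formula follows my memory of hctx_may_queue(): each active queue gets
 * at most max(total_tags / active_queues, 4) tags, regardless of how
 * large its own queue_depth is set via sysfs.
 */
#include <stdio.h>

static unsigned int fair_share(unsigned int total_tags, unsigned int active_queues)
{
	unsigned int depth = total_tags / active_queues;

	return depth > 4 ? depth : 4;	/* allow at least some tags */
}

int main(void)
{
	const unsigned int total_tags = 128;	/* shared tag-set depth in the test above */

	for (unsigned int luns = 1; luns <= 64; luns *= 2)
		printf("%2u active LUNs -> about %3u tags each\n",
		       luns, fair_share(total_tags, luns));
	return 0;
}

With 64 active queues that is only about 4 tags per LUN, so a per-disk
queue_depth of 128 cannot change how the 128 shared tags are distributed.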


Thanks,
Ming
