Message-ID: <BYAPR21MB12706DCD5ED9FC7AB3EE2EEABF759@BYAPR21MB1270.namprd21.prod.outlook.com>
Date:   Tue, 14 Dec 2021 00:31:23 +0000
From:   Dexuan Cui <decui@...rosoft.com>
To:     Ming Lei <ming.lei@...hat.com>
CC:     Jens Axboe <axboe@...nel.dk>, 'Christoph Hellwig' <hch@....de>,
        "'linux-block@...r.kernel.org'" <linux-block@...r.kernel.org>,
        Long Li <longli@...rosoft.com>,
        "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
        "'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: RE: Random high CPU utilization in blk-mq with the none scheduler

> From: Ming Lei <ming.lei@...hat.com>
> Sent: Sunday, December 12, 2021 11:38 PM

Ming, thanks so much for the detailed analysis!

> From the log:
> 
> 1) dm-mpath:
> - queue depth: 2048
> - busy: 848, and 62 of them are in the sw queue, so queue runs are
>   triggered often
> - nr_hw_queues: 1
> - dm-2 is in use, and dm-1/dm-3 is idle
> - dm-2's dispatch_busy is 8; that should be the reason why excessive CPU
> usage is observed when flushing the plug list without commit dc5fc361d891, in
> which hctx->dispatch_busy is simply bypassed
> 
> 2) iscsi
> - dispatch_busy is 0
> - nr_hw_queues: 1
> - queue depth: 113
> - busy=~33, active_queues is 3, so each LUN/iscsi host is saturated
> - 23 active LUNs, 23 * 33 = 759 in-flight commands
> 
> The high CPU utilization may be caused by:
> 
> 1) the big queue depth of dm mpath; the situation may improve a lot if it
> is reduced to 1024 or 800. The max number of in-flight commands allowed by
> the iscsi hosts can be figured out; if dm's queue depth is much bigger than
> that number, the extra commands still have to be dispatched, and the queue
> run is scheduled again immediately, so high CPU utilization is caused.

I think you're correct:
with dm_mod.dm_mq_queue_depth=256, the max CPU utilization is 8%.
with dm_mod.dm_mq_queue_depth=400, the max CPU utilization is 12%. 
with dm_mod.dm_mq_queue_depth=800, the max CPU utilization is 88%.

The performance with queue_depth=800 is poor.
The performance with queue_depth=400 is good.
The performance with queue_depth=256 is also good, and there is only a 
small drop compared with the 400 case.
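
Just to make the knob explicit for anyone following the thread:
dm_mq_queue_depth is a dm_mod module parameter in drivers/md/dm-rq.c that
is copied into the blk-mq tag set, so it directly caps how many requests
dm-mq can have in flight per device. A rough sketch of the plumbing
(paraphrased from the v5.16-era source, so names and permissions may not
be verbatim):

    /* drivers/md/dm-rq.c (sketch, not verbatim) */
    #define DM_MQ_QUEUE_DEPTH 2048    /* the default that was in use here */

    static unsigned dm_mq_queue_depth = DM_MQ_QUEUE_DEPTH;
    module_param(dm_mq_queue_depth, uint, 0644);
    MODULE_PARM_DESC(dm_mq_queue_depth,
                     "Queue depth for request-based dm-mq devices");

    int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
    {
            ...
            /* dm_mod.dm_mq_queue_depth on the kernel command line ends up here */
            md->tag_set->queue_depth = dm_mq_queue_depth;
            ...
    }

With the default of 2048, dm-mpath can queue far more than the ~759
in-flight commands the iscsi side can sustain per your numbers
(23 LUNs * ~33 each), so the excess requests just cycle through dispatch
and re-run.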

> 2) the single hw queue, so contention should be high, which should be
> avoided on a big machine; nvme-tcp might be better than iscsi here
> 
> 3) the iscsi I/O latency is a bit high
> 
> Even though CPU utilization is reduced by commit dc5fc361d891, I guess I/O
> performance still can't be good with v5.16-rc.
> 
> Thanks,
> Ming

Actually, the I/O performance of v5.16-rc4 (which includes commit
dc5fc361d891) is good -- it's about the same as with v5.16-rc4 + reverting
dc5fc361d891 + dm_mod.dm_mq_queue_depth=400 (or 256).
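
And to spell out the dispatch_busy point for the archives: before
dc5fc361d891, flushing a plug list went through the scheduler insert path,
which only tries direct issue when the hctx isn't flagged busy. Roughly
(paraphrased from block/blk-mq-sched.c around v5.15, so treat it as a
sketch rather than the exact code):

    /* block/blk-mq-sched.c (sketch, not verbatim) */
    void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
                                      struct blk_mq_ctx *ctx,
                                      struct list_head *list,
                                      bool run_queue_async)
    {
            struct elevator_queue *e = hctx->queue->elevator;

            /*
             * With no elevator ("none") and an idle hctx, the requests can
             * be issued straight to the driver.  A nonzero
             * hctx->dispatch_busy (an EWMA of recent dispatch results; 8
             * for dm-2 here) forces the slow path below: insert into the
             * sw queue and run the hw queue again, and with ~848 busy
             * requests that re-run loop is what burns the CPU.
             */
            if (!e && !run_queue_async && !hctx->dispatch_busy) {
                    blk_mq_try_issue_list_directly(hctx, list);
                    if (list_empty(list))
                            return;
            }
            blk_mq_insert_requests(hctx, ctx, list);
            blk_mq_run_hw_queue(hctx, run_queue_async);
    }

Since dc5fc361d891 issues the plug list directly without consulting
hctx->dispatch_busy, that would explain why the commit hides the problem
and the revert exposes it.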

Thanks,
Dexuan
