Date:   Mon, 13 Dec 2021 15:38:07 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Dexuan Cui <decui@...rosoft.com>
Cc:     Jens Axboe <axboe@...nel.dk>, 'Christoph Hellwig' <hch@....de>,
        "'linux-block@...r.kernel.org'" <linux-block@...r.kernel.org>,
        Long Li <longli@...rosoft.com>,
        "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
        "'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: Re: Random high CPU utilization in blk-mq with the none scheduler

On Mon, Dec 13, 2021 at 04:20:49AM +0000, Dexuan Cui wrote:
> > From: Ming Lei <ming.lei@...hat.com>
> >  ...
> > Can you provide the following blk-mq debugfs log?
> > 
> > (cd /sys/kernel/debug/block/dm-N && find . -type f -exec grep -aH . {} \;)
> > 
> > (cd /sys/kernel/debug/block/sdN && find . -type f -exec grep -aH . {} \;)
> > 
> > And it is enough to just collect log from one dm-mpath & one underlying iscsi
> > disk,
> > so we can understand basic blk-mq setting, such as nr_hw_queues, queue
> > depths, ...
> > 
> > Thanks,
> > Ming
> 
> Attached. I collected the logs for all the dm-* and sd* devices against
> v5.16-rc4 with the 3 commits reverted:
> b22809092c70 ("block: replace always false argument with 'false'")
> ff1552232b36 ("blk-mq: don't issue request directly in case that current is to be blocked")
> dc5fc361d891 ("block: attempt direct issue of plug list")
> 
> v5.16-rc4 does not reproduce the issue, so I'm pretty sure b22809092c70 is the
> patch that resolves the excessive CPU usage.

From the log:

1) dm-mpath:
- queue depth: 2048
- busy: 848, and 62 of them are in the sw queue, so run queue is
  triggered often
- nr_hw_queues: 1
- dm-2 is in use, and dm-1/dm-3 are idle
- dm-2's dispatch_busy is 8, which should be why the excessive CPU
  usage is observed when flushing the plug list without commit
  dc5fc361d891, in which hctx->dispatch_busy is simply bypassed

2) iscsi
- dispatch_busy is 0
- nr_hw_queues: 1
- queue depth: 113
- busy: ~33, active_queues is 3, so each LUN/iscsi host is saturated
- 23 active LUNs, 23 * 33 = 759 in-flight commands
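
Both sets of numbers above can be read straight from debugfs/sysfs, for
example (paths assume debugfs is mounted and reuse the device names from
this report; hctx0 because nr_hw_queues is 1):

  # EWMA-based estimate of how busy the dm-mpath hw queue is
  cat /sys/kernel/debug/block/dm-2/hctx0/dispatch_busy

  # per-host and per-LUN capacity on the iscsi side
  grep -aH . /sys/class/scsi_host/host*/can_queue
  grep -aH . /sys/block/sd*/device/queue_depth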

The high CPU utilization may be caused by:

1) the big queue depth of dm-mpath; the situation may improve a lot if
it is reduced to 1024 or 800. The max number of in-flight commands
allowed by the iscsi hosts can be figured out, and if dm's queue depth
is much bigger than that number, the extra commands still need to be
dispatched and run queue can be scheduled again immediately, which is
what causes the high CPU utilization (a quick way to try a smaller
depth is sketched after this list).

2) the single hw queue, so contention should be big; that should be
avoided on a big machine, and nvme-tcp might be better than iscsi here
(a rough connect example is at the end of this mail)

3) iscsi io latency is a bit high
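
For 1), that 2048 depth should come from dm_mod's dm_mq_queue_depth
module parameter (2048 is its default, if I remember correctly), so a
smaller depth can be tried without patching the kernel, e.g.:

  # picked up when a mapped device's queue is created, so the multipath
  # devices have to be set up again (or the box rebooted) to apply
  echo "options dm_mod dm_mq_queue_depth=1024" > /etc/modprobe.d/dm-mq.conf

  # or on the kernel command line:
  dm_mod.dm_mq_queue_depth=1024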

Even if CPU utilization is reduced by commit dc5fc361d891, I guess io
performance still can't be good with v5.16-rc.
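
BTW, for 2) above, if moving off iscsi is an option: nvme-tcp gives one
blk-mq hw queue per I/O queue, and the queue count can be chosen at
connect time with nvme-cli, roughly like this (address and NQN are made
up):

  nvme connect -t tcp -a 192.168.0.10 -s 4420 \
      -n nqn.2021-12.com.example:target1 -i 8    # 8 I/O queues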


Thanks,
Ming
