Date:   Sat, 11 Dec 2021 03:44:37 +0000
From:   Dexuan Cui <decui@...rosoft.com>
To:     Jens Axboe <axboe@...nel.dk>,
        "'ming.lei@...hat.com'" <ming.lei@...hat.com>,
        'Christoph Hellwig' <hch@....de>,
        "'linux-block@...r.kernel.org'" <linux-block@...r.kernel.org>
CC:     Long Li <longli@...rosoft.com>,
        "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
        "'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: RE: Random high CPU utilization in blk-mq with the none scheduler

> From: Jens Axboe <axboe@...nel.dk>
> 
> Just out of curiosity, can you do:
> 
> # perf record -a -g -- sleep 3
> 
> when you see the excessive CPU usage, then attach the output of
> 
> # perf report -g
> 
> to a reply?

I ran the commands against the 5.15.7 kernel and got a 771-MB
perf.data file. Even after compressing it with 'bzip2 -9', it's still
22 MB, which is probably too big to share via the list. I'll try to
put it somewhere and send you a link.
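
Alternatively, dumping the report as text and compressing that should
be much smaller than shipping perf.data itself; a rough sketch,
assuming perf.data is in the current directory (perf-report.txt is
just a name I picked):

# perf report -g --stdio -i perf.data > perf-report.txt
# bzip2 -9 perf-report.txt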
 
> How confident are you in your bisect result?
> 
> --
> Jens Axboe

I'm pretty confident:
1) I can't reproduce the issue with v5.16-rc4, even after running the
test 10 times; typically the issue reproduces every time.

2) If I revert the commit

dc5fc361d891 ("block: attempt direct issue of plug list")

and the related patches

ff1552232b36 ("blk-mq: don't issue request directly in case that current is to be blocked")
b22809092c70 ("block: replace always false argument with 'false'")

from v5.16-rc4, I'm able to repro the issue immediately.
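
For reference, the reverts can be applied on top of v5.16-rc4 roughly
like this (the branch name is arbitrary, and the commits are reverted
newest-first to minimize conflicts):

$ git checkout -b revert-test v5.16-rc4
$ git revert --no-edit b22809092c70 ff1552232b36 dc5fc361d891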

Thanks,
Dexuan
