Date:   Thu, 31 Mar 2022 20:27:05 -0600
From:   Keith Busch <kbusch@...nel.org>
To:     Michael Marod <michael@...haelmarod.com>
Cc:     Christoph Hellwig <hch@...radead.org>,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org
Subject: Re: NVME performance regression in Linux 5.x due to lack of block
 level IO queueing

On Thu, Mar 31, 2022 at 11:22:03PM +0000, Michael Marod wrote:
> # /usr/local/bin/fio -name=randrw -filename=/opt/foo -direct=1 -iodepth=1 -thread -rw=randrw -ioengine=psync -bs=4k -size=10G -numjobs=16 -group_reporting=1 -runtime=120
> 
> // Ubuntu 16.04 / Linux 4.4.0:
> Run status group 0 (all jobs):
>    READ: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=6537MiB (6854MB), run=120002-120002msec
>   WRITE: bw=54.5MiB/s (57.2MB/s), 54.5MiB/s-54.5MiB/s (57.2MB/s-57.2MB/s), io=6544MiB (6862MB), run=120002-120002msec
> 
> // Ubuntu 18.04 / Linux 5.4.0:
> Run status group 0 (all jobs):
>    READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=2821MiB (2959MB), run=120002-120002msec
>   WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=2819MiB (2955MB), run=120002-120002msec
> 
> // Ubuntu 18.04 / Linux 5.17:
> Run status group 0 (all jobs):
>    READ: bw=244MiB/s (255MB/s), 244MiB/s-244MiB/s (255MB/s-255MB/s), io=28.6GiB (30.7GB), run=120001-120001msec
>   WRITE: bw=244MiB/s (256MB/s), 244MiB/s-244MiB/s (256MB/s-256MB/s), io=28.6GiB (30.7GB), run=120001-120001msec

Thanks for the info. I don't know of anything block or nvme specific that might
explain an order of magnitude perf difference.

Could you try the same test without the filesystem? You mentioned using mdraid,
so try '--filename=/dev/mdX'. If that also shows a similar performance
difference, try using one of your nvme member drives directly, like
'--filename=/dev/nvme1n1'. That should isolate which subsystem is contributing
to the difference.
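
For example, something along these lines should work (a sketch assuming the md
array is /dev/md0 and one member drive is /dev/nvme1n1; substitute your actual
device names):

  # fio -name=randrw -filename=/dev/md0 -direct=1 -iodepth=1 -thread -rw=randrw -ioengine=psync -bs=4k -size=10G -numjobs=16 -group_reporting=1 -runtime=120

then repeat with -filename=/dev/nvme1n1. Note this issues writes directly to
the raw device, so only run it against devices whose contents you can discard.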
