Message-ID: <0a6373c6-817f-6ead-2a71-7c83c03406be@microway.com>
Date:   Fri, 29 Mar 2019 16:55:44 -0400
From:   Rick Warner <rick@...roway.com>
To:     linux-kernel@...r.kernel.org
Subject: slow write performance with software RAID on nvme storage

Hi All,

We've been testing a 24-drive NVMe software RAID and getting far lower
write speeds than expected.  The drives are connected through PLX
switch chips such that 12 drives share one x16 link and the other 12
drives use a second x16 link.  The system is a Supermicro
2029U-TN24R4T.  The drives are Intel DC P4500 1TB.

We're testing with fio using 8 jobs.
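
For reference, the write runs look roughly like this (a representative
invocation; the exact options in our test scripts may differ slightly,
and /dev/md0 stands in for whatever target is under test):

    fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
        --ioengine=libaio --iodepth=32 --direct=1 --numjobs=8 \
        --size=50g --offset_increment=50g --group_reporting \
        --time_based --runtime=60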

Using all defaults with RAID0, I can only get 4-5 GB/s write speeds
but can hit ~24 GB/s read speeds.  Each drive can sustain over 1 GB/s
of writes, so we should be able to hit at least 20 GB/s in aggregate.
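
(For the record, the RAID0 array was created with mdadm defaults,
i.e. something like:

    mdadm --create /dev/md0 --level=0 --raid-devices=24 /dev/nvme*n1

with no --chunk override.)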

Testing with RAID6 and defaults got significantly lower results (down
around 1.5 GB/s).  Using a 64k chunk and increasing group_thread_cnt
brought that up to ~4 GB/s.
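
The tuned RAID6 setup was along these lines (values illustrative
rather than exact):

    mdadm --create /dev/md0 --level=6 --chunk=64 --raid-devices=24 /dev/nvme*n1
    echo 8 > /sys/block/md0/md/group_thread_cnt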

dmesg shows the RAID6 parity calculation speed at ~40 GB/s:
[    4.215386] raid6: using algorithm avx512x2 gen() 41397 MB/s


I've played around with filesystem choices and block-layer queue
tuning but haven't seen any significant improvement.
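
For concreteness, the tuning was along the lines of (values
illustrative):

    echo none  > /sys/block/nvme0n1/queue/scheduler    # per NVMe member drive
    echo 32768 > /sys/block/md0/md/stripe_cache_size   # RAID6 runs only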

What is the bottleneck here? If it's not known, what should I do to
determine it?
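
If it helps, I can capture diagnostics during the runs and post them,
e.g.:

    iostat -x 1          # per-device utilization and latency
    cat /proc/mdstat     # array state and geometry
    perf top             # where kernel CPU time goes during writes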

I've done a variety of other tests with this system and am happy to
elaborate further if any other information is needed.

Thanks,
Rick Warner
