Date:   Thu, 16 Apr 2020 23:34:29 -0400
From:   Rick Warner <rick@...roway.com>
To:     linux-kernel@...r.kernel.org
Subject: Re: slow write performance with software RAID on nvme storage

Additional testing with fio has shown near-theoretical write speeds if I
test directly against the /dev/md device instead of going through either
XFS or ext4.
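For reference, the direct-to-device run looks roughly like this (device
path and runtime are illustrative, not my exact invocation; numjobs
matches the 8 jobs mentioned below):

fio --name=seqwrite --filename=/dev/md0 --direct=1 --rw=write \
    --bs=1M --ioengine=libaio --iodepth=32 --numjobs=8 \
    --offset_increment=64g --group_reporting --runtime=60 --time_based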

I've tested different block-layer queue settings without seeing any
significant change.
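The knobs I've been trying are along these lines (md0 and nvme0n1 are
placeholders, values are examples only, and the md-specific ones apply
to the RAID6 runs, not RAID0):

echo 8 > /sys/block/md0/md/group_thread_cnt       # parity worker threads (raid5/6)
echo 32768 > /sys/block/md0/md/stripe_cache_size  # enlarge the stripe cache (raid5/6)
echo none > /sys/block/nvme0n1/queue/scheduler    # per-member I/O scheduler
echo 1023 > /sys/block/nvme0n1/queue/nr_requests  # per-member queue depth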

Is it possible to get a single XFS or ext4 filesystem to sustain >10 GB/s
write speeds?
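One thing I still want to rule out is stripe alignment at mkfs time. For
a 64k-chunk RAID6 across 24 drives (22 data disks, 4k blocks) I believe
that would be something like (untested numbers, shown for illustration):

mkfs.xfs -d su=64k,sw=22 /dev/md0
mkfs.ext4 -E stride=16,stripe-width=352 /dev/md0

although mkfs.xfs is supposed to pick the geometry up from the md device
automatically.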

On 2019-03-29 16:55, Rick Warner wrote:
> Hi All,
>
> We've been testing a 24-drive NVMe software RAID and getting far lower
> write speeds than expected.  The drives are connected through PLX chips
> such that 12 drives share one x16 connection and the other 12 drives use
> another x16 link.  The system is a Supermicro 2029U-TN24R4T.  The drives
> are Intel DC P4500 1TB.
>
> We're testing with fio using 8 jobs.
>
> Using all defaults with RAID0, I can only get 4-5 GB/s write speed but
> can hit ~24 GB/s read speed.  The drives are each capable of over 1 GB/s
> of writes, so we should be able to hit at least 20 GB/s aggregate.
>
> Testing with RAID6 at defaults was significantly slower (down around
> 1.5 GB/s).  Using a 64k chunk and increasing group_thread_cnt raised
> that to ~4 GB/s.
>
> dmesg shows the RAID6 parity calculation speed at ~40 GB/s:
> [    4.215386] raid6: using algorithm avx512x2 gen() 41397 MB/s
>
> I've played around with queue settings and filesystem tuning but haven't
> seen any significant improvement.
>
> What is the bottleneck here? If it's not known, what should I do to
> determine it?
>
> I've done a variety of other tests with this system and am happy to
> elaborate further if any other information is needed.
>
> Thanks,
> Rick Warner
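
To the bottleneck question above, what I've been watching while the fio
runs are going is roughly this (examples, not a complete methodology):

# per-device utilization and latency for the array and its members
iostat -x 1

# hot kernel paths / lock contention during the write workload
perf top -g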
