Date:   Tue, 02 Jun 2020 17:22:39 +0300
From:   Konstantin Kharlamov <hi-angel@...dex.ru>
To:     linux-ext4@...r.kernel.org
Subject: Re: Changing a workload results in performance drop

So, FTR, I found on kernelnewbies that in Linux 5.7 ext4 migrated to
iomap. Out of curiosity I reran the tests on 5.7. The problem is still
reproducible.
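
One note on the transient drops mentioned in the quoted report below:
instead of only grepping the summary, fio can log per-second IOPS to a
file, so the 600-700 IOPS dips are recorded rather than visible only in
the interactive output. A sketch, assuming the job file from the steps
to reproduce and the standard fio job options write_iops_log and
log_avg_msec (the log prefix name is arbitrary):

```ini
; additions to the [job-section] of the fio job file quoted below;
; fio then writes temp-fio_iops.1.log with roughly one averaged
; IOPS sample per second
write_iops_log=temp-fio
log_avg_msec=1000
```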

On Fri, 2020-04-24 at 17:56 +0300, Konstantin Kharlamov wrote:
> * SSDs are used in testing, so random access is not a concern. I also tried
>   the "steps to reproduce" against a raw block device, and there IOPS
>   consistently holds at 9k for me.
> * "Direct" IO is used to bypass the file-system cache.
> * The issue is far less visible on XFS, so it looks specific to the file
>   system.
> * The biggest difference I've seen is with a 70% read / 30% write workload,
>   but for simplicity the "steps to reproduce" use 100% writes.
> * Performance seems to improve over time (perhaps within a day), so for best
>   results re-create the ext4 filesystem anew before testing.
> * In the "steps to reproduce" I grep fio's stdout, which suppresses the
>   interactive output. The interactive output may be interesting though: I've
>   often seen the workload drop to 600-700 IOPS while the average was 5-6k.
> * The original problem I was working on:
>   https://github.com/openzfs/zfs/issues/10231
> 
> # Steps to reproduce (in terms of terminal commands)
> 
>      $ cat fio_jobfile
>      [job-section]
>      name=temp-fio
>      bs=8k
>      ioengine=libaio
>      rw=randrw
>      rwmixread=0
>      rwmixwrite=100
>      filename=/mnt/test/file1
>      iodepth=1
>      numjobs=1
>      group_reporting
>      time_based
>      runtime=1m
>      direct=1
>      filesize=4G
>      $ mkfs.ext4 /dev/sdw1
>      $ mount /dev/sdw1 /mnt/test
>      $ truncate -s 100G /mnt/test/file1
>      $ fio fio_jobfile | grep -i IOPS
>        write: IOPS=12.5k, BW=97.0MiB/s (103MB/s)(5879MiB/60001msec)
>         iops        : min=10966, max=14730, avg=12524.20, stdev=1240.27, samples=119
>      $ sed -i 's/4G/100G/' fio_jobfile
>      $ fio fio_jobfile | grep -i IOPS
>        write: IOPS=5880, BW=45.9MiB/s (48.2MB/s)(2756MiB/60001msec)
>         iops        : min= 4084, max= 6976, avg=5879.31, stdev=567.58, samples=119
> 
> ## Expected
> 
> Performance should be more or less the same between the two runs.
> 
> ## Actual
> 
> The second test is roughly twice as slow (5.9k IOPS vs. 12.5k).
> 
> # Versions
> 
> * Kernel version: 5.6.2-050602-generic
> 
> It seems, however, that the problem is present in at least 4.19 and
> 5.4 as well, so it is not a regression.
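
For reference, the avg= figures in the fio summaries above can be pulled
out mechanically when comparing runs. A small sketch (the
parse_avg_iops name is mine; the line format matches the fio output
quoted above, and other fio versions may print it differently):

```shell
# parse_avg_iops: extract the avg= value from fio's per-job iops
# summary line on stdin. Relies only on grep/cut from coreutils.
parse_avg_iops() {
    grep -o 'avg=[0-9.]*' | head -n1 | cut -d= -f2
}

# Fed the first run's summary line, this prints 12524.20:
echo ' iops        : min=10966, max=14730, avg=12524.20, stdev=1240.27' \
    | parse_avg_iops
```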
