Message-ID: <50105347.3050308@suse.de>
Date:	Thu, 26 Jul 2012 01:42:55 +0530
From:	Ankit Jain <jankit@...e.de>
To:	Dave Chinner <david@...morbit.com>
Cc:	Al Viro <viro@...iv.linux.org.uk>, bcrl@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-aio@...ck.org,
	linux-kernel@...r.kernel.org, Jan Kara <jack@...e.cz>
Subject: Re: [RFC][PATCH] Make io_submit non-blocking

On 07/25/2012 04:01 AM, Dave Chinner wrote:
> On Tue, Jul 24, 2012 at 05:11:05PM +0530, Ankit Jain wrote:
[snip]
>> **Unpatched**
>> read : io=102120KB, bw=618740 B/s, iops=151 , runt=169006msec
>> slat (usec): min=275 , max=87560 , avg=6571.88, stdev=2799.57
> 
> Hmmm, I had to check the numbers twice - that's only 600KB/s.
> 
> Perhaps you need to test on something more than a single piece of
> spinning rust. Optimising AIO for SSD rates (say 100k 4k write IOPS)
> is probably more relevant to the majority of AIO users....

I tested with a ramdisk to "simulate" a fast disk and had attached the
results. I'll try to get hold of an SSD and then test with that as well.
Meanwhile, I ran the tests again with ext3/ext4/xfs/btrfs. I'm not sure
what I screwed up in that previous test, but the numbers now look proper
(in line with what I was getting in my earlier testing):

For the disk tests, I used a separate partition formatted with each
filesystem and ran fio on it with a single job. Here "Old" is 3.5-rc7
(918227b).
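
(The submit latencies below are what fio reports as "slat", i.e.
essentially the wall-clock time spent inside io_submit(). For reference,
here is a rough user-space sketch of measuring that directly with libaio --
this is not the fio job I actually ran, and the device path is only a
placeholder:)

/*
 * Rough sketch: time io_submit() directly, which is roughly what
 * fio's "slat" column reports. Build with: gcc -O2 slat.c -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLK 4096

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/ram0"; /* placeholder */
	int fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) { perror("open"); return 1; }

	void *buf;
	if (posix_memalign(&buf, BLK, BLK))     /* O_DIRECT wants aligned buffers */
		return 1;

	io_context_t ctx = 0;
	if (io_setup(32, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	struct timespec t0, t1;

	for (int i = 0; i < 1000; i++) {
		io_prep_pread(&cb, fd, buf, BLK, (long long)i * BLK);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		int rc = io_submit(ctx, 1, cbs);        /* the latency in question */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		if (rc != 1) { fprintf(stderr, "io_submit: %d\n", rc); break; }

		printf("slat %ld usec\n",
		       (long)((t1.tv_sec - t0.tv_sec) * 1000000L +
			      (t1.tv_nsec - t0.tv_nsec) / 1000));

		io_getevents(ctx, 1, 1, &ev, NULL);     /* reap before next submit */
	}

	io_destroy(ctx);
	close(fd);
	return 0;
}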

------ disk -------
====== ext3 ======
                                       submit latencies (usec)
      B/w         iops   runtime      min  max     avg   std dev

ext3-read :
Old:  453352 B/s  110   231050msec    3  283048  170.28 5183.28
New:  451298 B/s  110   232555msec    0     444    8.18    7.95
ext3-write:
Old:  454309 B/s  110   231050msec    2  304614  232.72 6549.82
New:  450488 B/s  109   232555msec    0     233    7.94    7.23

====== ext4 ======
ext4-read :
Old:  459824 B/s  112   228635msec    2  260051  121.40 3569.78
New:  422700 B/s  103   247097msec    0     165    8.18    7.87
ext4-write:
Old:  457424 B/s  111   228635msec    3  312958  166.75 4616.58
New:  426015 B/s  104   247097msec    0     169    8.00    8.08

====== xfs ======
xfs-read :
Old:  467330 B/s  114   224516msec    3     272   46.45   25.35
New:  417049 B/s  101   252262msec    0     165    7.84    7.87
xfs-write:
Old:  466746 B/s  113   224516msec    3     265   52.52   28.13
New:  414289 B/s  101   252262msec    0     143    7.58    7.66

====== btrfs ======
btrfs-read :
Old:  1027.1KB/s  256    99918msec    5   84457   62.15  527.24
New:  1054.5KB/s  263    97542msec    0     121    9.72    7.05
btrfs-write:
Old:  1021.8KB/s  255    99918msec    10 139473   84.96  899.99
New:  1045.2KB/s  261    97542msec    0     248    9.55    7.02

These are the figures with a ramdisk:

------ ramdisk -------
====== ext3 ======
                                         submit latencies (usec)
        B/w       iops       runtime    min  max   avg  std dev

ext3-read :
Old:  430312KB/s  107577     2026msec    1  7072   3.85 15.17
New:  491251KB/s  122812     1772msec    0    22   0.39  0.52
ext3-write:
Old:  428918KB/s  107229     2026msec    2    61   3.46  0.85
New:  491142KB/s  122785     1772msec    0    62   0.43  0.55

====== ext4 ======
ext4-read :
Old:  466132KB/s  116532     1869msec    2   133   3.66  1.04
New:  542337KB/s  135584     1607msec    0    67   0.40  0.54
ext4-write:
Old:  465276KB/s  116318     1869msec    2   127   2.96  0.94
New:  540923KB/s  135230     1607msec    0    73   0.43  0.55

====== xfs ======
xfs-read :
Old:  485556KB/s  121389     1794msec    2   160   3.58  1.22
New:  581477KB/s  145369     1495msec    0    19   0.39  0.51
xfs-write:
Old:  484789KB/s  121197     1794msec    1    87   2.68  0.99
New:  582938KB/s  145734     1495msec    0    56   0.43  0.55

====== btrfs ======
I had trouble with btrfs on a ramdisk, though: it complained about space
during preallocation. This was with a 4 GB ramdisk and fio set to write a
1700 MB file, so these numbers are from that partial run. Btrfs ran fine
on a regular disk.

btrfs-read :
Old:  107519KB/s  26882     2579msec    13  1492  17.03  9.23
New:  109878KB/s  27469     4665msec    0     29   0.45  0.55
btrfs-write:
Old:  108047KB/s  27020     2579msec    1  64963  17.21 823.88
New:  109413KB/s  27357     4665msec    0     32   0.48   0.56

Also, I dropped caches ("echo 3 > /proc/sys/vm/drop_caches") and sync'ed
before running each test. All the fio log files are attached.
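
(For completeness, the same cache drop done from C instead of the shell;
it needs root, and "3" drops the page cache plus dentries and inodes:)

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	sync();                                 /* flush dirty data first */
	FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f) { perror("drop_caches"); return 1; }
	fputs("3\n", f);
	return fclose(f) ? 1 : 0;
}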

Any suggestions on how I might test this better, other than the SSD
suggestion, of course?

[snip]
> Also, you added a memory allocation in the io submit code. Worse
> case latency will still be effectively undefined - what happens to
> latencies if you generate memory pressure while the test is running?

I'll try to fix this.
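
Just thinking out loud (this is *not* what the patch currently does): one
generic way to keep that allocation out of the submit path's worst case
would be to back it with a mempool reserved at setup time, roughly along
these lines -- the names here are made up:

/*
 * Sketch only: reserve per-request work items in a mempool when the
 * context is set up, so the submit path never waits on the allocator
 * under memory pressure.
 */
#include <linux/mempool.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct submit_work {                    /* hypothetical per-iocb work item */
	struct work_struct work;
	/* ... request details ... */
};

static struct kmem_cache *submit_cache;
static mempool_t *submit_pool;

/* Setup path (e.g. io_setup()), where blocking is acceptable. */
static int submit_pool_init(void)
{
	submit_cache = KMEM_CACHE(submit_work, 0);
	if (!submit_cache)
		return -ENOMEM;
	/* keep a reserve of 16 items so submission can always make progress */
	submit_pool = mempool_create_slab_pool(16, submit_cache);
	if (!submit_pool) {
		kmem_cache_destroy(submit_cache);
		return -ENOMEM;
	}
	return 0;
}

/* Submit path: doesn't sleep; falls back to the reserved items when the
 * allocator is under pressure, and returns NULL only once those run out. */
static struct submit_work *submit_work_get(void)
{
	return mempool_alloc(submit_pool, GFP_NOWAIT);
}

static void submit_work_put(struct submit_work *sw)
{
	mempool_free(sw, submit_pool);
}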

-- 
Ankit Jain
SUSE Labs

