Message-ID: <20120129201543.GJ29272@MAIL.13thfloor.at>
Date:	Sun, 29 Jan 2012 21:15:43 +0100
From:	Herbert Poetzl <herbert@...hfloor.at>
To:	Wu Fengguang <wfg@...ux.intel.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>, Tejun Heo <tj@...nel.org>
Subject: Re: Bad SSD performance with recent kernels

On Mon, Jan 30, 2012 at 12:10:58AM +0800, Wu Fengguang wrote:
> On Sun, Jan 29, 2012 at 02:13:51PM +0100, Eric Dumazet wrote:
>> On Sunday, 29 January 2012 at 19:16 +0800, Wu Fengguang wrote:

>>> Note that as long as buffered read(2) is used, it makes almost no
>>> difference (well, at least for now) to do "dd bs=128k" or "dd bs=2MB":
>>> the 128kb readahead size will be used underneath to submit read IO.
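
(for reference: the readahead window mentioned above is a per-device
setting and can be inspected or changed like this; /dev/sda matches the
device used in the tests, the 512 sectors are only an example value)

  blockdev --getra /dev/sda                  # readahead in 512-byte sectors (256 = 128kb)
  cat /sys/block/sda/queue/read_ahead_kb     # the same value in kb
  blockdev --setra 512 /dev/sda              # e.g. raise it to 256kb before re-testing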

>> Hmm...

>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
>> 32768+0 records in
>> 32768+0 records out
>> 4294967296 bytes (4.3 GB) copied, 20.7718 s, 207 MB/s


>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 4294967296 bytes (4.3 GB) copied, 27.7824 s, 155 MB/s

> Interesting. Here are my test results:

> root@...-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> 32768+0 records in
> 32768+0 records out
> 4294967296 bytes (4.3 GB) copied, 19.0121 s, 226 MB/s
> root@...-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> 2048+0 records in
> 2048+0 records out
> 4294967296 bytes (4.3 GB) copied, 19.0214 s, 226 MB/s

> Maybe the /dev/sda performance bug on your machine is sensitive to timing?
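
(as an aside, not something that was run here: one way to take readahead
and the page cache out of such a comparison entirely is an O_DIRECT read
of the same size, e.g. with GNU dd

  dd if=/dev/sda of=/dev/null bs=2M count=2048 iflag=direct

if that stays fast across kernels, the slowdown is somewhere in the
buffered read/readahead path rather than in the device or the driver)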

here are some more confusing results from tests with dd and bonnie++;
this time I focused on partition vs. loop vs. linear dm (of the same partition):

kernel	  -------------- read --------------  -- write ---  all
	  -------- dd --------  -------- bonnie++ --------------
          [MB/s] real[s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU  %CPU
direct
2.6.38.8  262.91   81.90  28.7	 72.30   6.0  248.53  52.0  15.9
2.6.39.4   36.09  595.17   3.1	 70.62   6.0  250.25  53.0  16.3
3.0.18     50.47  425.65   4.1	 70.00   5.0  251.70  44.0  13.9
3.1.10     27.28  787.32   2.0	 75.65   5.0  251.96  45.0  13.3
3.2.2      27.11  792.28   2.0	 76.89   6.0  250.38  44.0  13.3

loop
2.6.38.8  242.89   88.50  21.5	246.58  15.0  240.92  53.0  14.4
2.6.39.4  241.06   89.19  21.5	238.51  15.0  257.59  57.0  14.8
3.0.18	  261.44   82.23  18.8	256.66  15.0  255.17  48.0  12.6
3.1.10	  253.93   84.64  18.1	107.66   7.0  156.51  28.0  10.6
3.2.2	  262.58   81.82  19.8	110.54   7.0  212.01  40.0  11.6

linear
2.6.38.8  262.57   82.00  36.8	 72.46   6.0  243.25  53.0  16.5
2.6.39.4   25.45  843.93   2.3	 70.70   6.0  248.05  54.0  16.6
3.0.18	   55.45  387.43   5.6	 69.72   6.0  249.42  45.0  14.3
3.1.10	   36.62  586.50   3.3	 74.74   6.0  249.99  46.0  13.4
3.2.2	   28.28  759.26   2.3	 74.20   6.0  248.73  46.0  13.6
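
(rough sketch of how the three configurations relate; the partition name
is a placeholder here, the exact commands are part of the attached tests)

  # the partition itself
  dd if=/dev/sdaX of=/dev/null bs=2M

  # loop device on top of the same partition
  losetup /dev/loop0 /dev/sdaX
  dd if=/dev/loop0 of=/dev/null bs=2M

  # linear device-mapper target over the same partition
  echo "0 $(blockdev --getsz /dev/sdaX) linear /dev/sdaX 0" | dmsetup create lin0
  dd if=/dev/mapper/lin0 of=/dev/null bs=2M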


it seems that dd performance when using a loop device is unaffected
and even improves with the kernel version, while the filesystem
performance on loop OTOH degrades starting with 3.1 ...

in general, filesystem read performance is bad on everything but
a loop device ... judging from the results I'd conclude that there
are at least two different issues 

tests and test results are attached and can be found here:
http://vserver.13thfloor.at/Stuff/SSD/

I plan to do some more tests on the filesystem with bonnie++ -b and -D
tonight; please let me know if you want to see specific output
and/or have any tests I should run with each kernel ...
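
(that would be something along these lines; /mnt/test is only a
placeholder, -b disables write buffering and fsyncs after every write,
-D uses O_DIRECT for the bulk IO tests)

  bonnie++ -d /mnt/test -u root -b
  bonnie++ -d /mnt/test -u root -D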

HTH,
Herbert

> Thanks,
> Fengguang

Download attachment "SSD.txz" of type "application/octet-stream" (56488 bytes)
