Message-ID: <47EC199F.4030102@tmr.com>
Date:	Thu, 27 Mar 2008 18:03:11 -0400
From:	Bill Davidsen <davidsen@....com>
To:	Emmanuel Florac <eflorac@...ellique.com>
CC:	Bart Van Assche <bart.vanassche@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: RAID-1 performance under 2.4 and 2.6

Emmanuel Florac wrote:
> Le Wed, 26 Mar 2008 12:15:57 +0100
> "Bart Van Assche" <bart.vanassche@...il.com> écrivait:
> 
>> You are welcome to post the numbers you obtained with dd for direct
>> I/O on a RAID-1 setup for 2.4 versus 2.6 kernel.
> 
> Here we go (tested on slightly slower hardware: Athlon64 3000+,
> nVidia chipset). Actually, the direct I/O result is identical. However, the
> significant number for the end user in this case is the NFS throughput.
> 


> 2.4 kernel (2.4.32), async write
> --------------------------------
> root@0[root]# ./dd if=/dev/zero of=/mnt/raid/testdd01 bs=1M count=1024 
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 13.407 seconds, 80.1 MB/s
> 
> 2.4 kernel (2.4.32), async write thru NFS mount
> --------------------------------
> emmanuel[/mnt/temp]$ dd if=/dev/zero of=./testdd01 bs=1M count=1024 
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 15.5176 s, 69.2 MB/s
> 
> 2.4 kernel (2.4.32), async read
> --------------------------------
> root@0[root]# ./dd if=/mnt/raid/testdd01 of=/dev/null bs=1M
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 15.752 seconds, 68.2 MB/s
> 
> 2.4 kernel (2.4.32), sync write
> --------------------------------
> root@0[root]# ./dd if=/dev/zero of=/mnt/raid/testdd01 bs=1M count=1024 \
> oflag=direct,dsync 
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 21.7874 seconds, 49.3 MB/s
> 
> 2.6 kernel (2.6.22.18), async write
> --------------------------------
> root@0[root]# ./dd if=/dev/zero of=/mnt/raid/testdd02 bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 17.1347 seconds, 62.7 MB/s
> 
> 2.6 kernel (2.6.22.18), async write thru NFS mount
> --------------------------------
> emmanuel[/mnt/temp]$ dd if=/dev/zero of=./testdd02 bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 21.3618 s, 50.3 MB/s
> 
> 2.6 kernel (2.6.22.18), async read
> --------------------------------
> root@0[root]# ./dd if=/mnt/raid/testdd02 of=/dev/null bs=1M
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 15.7599 seconds, 68.1 MB/s
> 
> 2.6 kernel (2.6.22.18), sync write
> --------------------------------
> root@0[root]# ./dd if=/dev/zero of=/mnt/raid/testdd02 bs=1M count=1024 \
> oflag=direct,dsync 
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 21.7011 seconds, 49.5 MB/s
> 
The time you usually want to measure is the time to get all the data onto 
the other drive. For that, fdatasync allows typical buffering during the 
copy while waiting at the end until every byte is on the destination 
platter. It doesn't change the speed, just makes the numbers more stable. 
That's the method I use, since most simple applications just use write() 
to send data. It may or may not give numbers representative of your 
application.
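A minimal sketch of that measurement, assuming GNU dd (conv=fdatasync was added in coreutils; the path /tmp/dd_test is a placeholder, not from the thread — point it at the array under test):

```shell
# Write through the page cache as usual, but have dd call fdatasync()
# once at the end, so the reported time includes flushing every byte
# to the destination. Contrast with oflag=direct,dsync above, which
# bypasses the cache and syncs on every write.
dd if=/dev/zero of=/tmp/dd_test bs=1M count=64 conv=fdatasync

# Sanity-check the size written: 64 * 1 MiB = 67108864 bytes.
wc -c < /tmp/dd_test
```

The difference from plain async dd is only where the clock stops: buffered writes still overlap with disk I/O, but the final fdatasync() keeps a fast page cache from inflating the MB/s figure.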

-- 
Bill Davidsen <davidsen@....com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot

