Date:	Mon, 02 Jun 2008 22:14:39 -0500
From:	Eric Sandeen <>
To:	Valerie Clement <>
CC:	ext4 development <>,
	Mingming Cao <>,
	"Jose R. Santos" <>
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Valerie Clement wrote:
>> Hi all,
>> Over the past couple of weeks, I ran batches of tests to get some
>> performance numbers for the new ext4 features like uninit_groups,
>> flex_bg, and journal_checksum on a 5TB filesystem.
>> I tried to test almost all combinations of mkfs and mount options, but
>> I put only a subset of them in the result tables, the most significant
>> for me.
>> I had started these tests on a 2.6.26-rc1 kernel, but I got several
>> hangs and crashes occurring randomly outside ext4, sometimes in the
>> slab code or in the SCSI driver for example, and they were not
>> reproducible.
>> Since 2.6.26-rc2, no crashes or hangs have occurred with ext4 on my
>> system.
>> The first results and the test description are available here:
> One other question on the tests; am I reading correctly that ext3 used
> "data=writeback" but ext4 used the default data=ordered mode?

I was interested in the results, especially since ext3 seemed to match
ext4 pretty well for throughput, although the cpu utilization differed.

I re-ran the same ffsb profiles on an 8G, 4-way opteron box, connected
to a "Vendor: WINSYS   Model: SF2372" 2T hardware raid array with 512MB
cache, connected via fibrechannel.

Reads go pretty fast:

# dd if=/dev/sdc bs=16M count=512 iflag=direct of=/dev/null
8589934592 bytes (8.6 GB) copied, 23.2257 seconds, 370 MB/s
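As a sanity check on that figure, the reported byte count and elapsed
time work out to the same rate dd prints (dd uses decimal MB, i.e.
10^6 bytes):

```shell
# Recompute dd's reported throughput: bytes / seconds / 10^6 = MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 8589934592 / 23.2257 / 1000000 }'
# prints "370 MB/s"
```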

I got some different numbers....

This was with e2fsprogs-1.39 for ext3, e2fsprogs-1.40.10 for ext4, and
xfsprogs-2.9.8 for xfs.

I used defaults except: data=writeback for ext[34] and the nobarrier
option for xfs.  ext3 was made with 128-byte inodes, ext4 with 256-byte
(the new default).  XFS used stock mkfs.  I formatted the entire block
device /dev/sdc.
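The exact invocations aren't in the mail; a plausible sketch of the
setup described above, assuming the /dev/sdc device from the mail and a
hypothetical /mnt mount point (the mkfs command names and flags are my
reconstruction, not quoted from the original runs):

```shell
# ext3: 128-byte inodes, data=writeback at mount time
mkfs.ext3 -I 128 /dev/sdc
mount -t ext3 -o data=writeback /dev/sdc /mnt

# ext4: defaults (256-byte inodes are the new default), data=writeback
mkfs.ext4 /dev/sdc
mount -t ext4 -o data=writeback /dev/sdc /mnt

# xfs: stock mkfs, barriers disabled at mount time
mkfs.xfs /dev/sdc
mount -t xfs -o nobarrier /dev/sdc /mnt
```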

For the large file write test:

	MB/s	CPU %
ext3	140	 90.7
ext4	182	 50.2
xfs	222	145.0

And for the small random readwrite test:

	trans/s	CPU %
ext3	 9830	 12.2
ext4	11996	 18.1
xfs	13863	 23.5
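For what it's worth, dividing throughput by CPU% in the two tables
above gives a rough efficiency comparison; this is my arithmetic, not a
measurement from the runs, and it makes ext4's lower CPU cost per unit
of work stand out:

```shell
# Throughput per CPU% from the tables above (MB/s per CPU%, trans/s per CPU%)
awk 'BEGIN {
  printf "large write  ext3 %.2f  ext4 %.2f  xfs %.2f (MB/s per CPU%%)\n",
         140/90.7, 182/50.2, 222/145.0
  printf "small rw     ext3 %.0f  ext4 %.0f  xfs %.0f (trans/s per CPU%%)\n",
         9830/12.2, 11996/18.1, 13863/23.5
}'
```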

Not sure what the difference is ...

If you have your tests scripted up, I'd be interested in running all the
variations on this hardware as well, since it seems to show more of a
throughput difference.

