Date:	Thu, 1 Mar 2012 21:45:31 -0500
From:	Ted Ts'o <tytso@....edu>
To:	Xupeng Yun <xupeng@...eng.me>
Cc:	Ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: Bad performance of ext4 with kernel 3.0.17

Hmm, it sounds like we're hitting some kind of scaling problem.  How
many CPUs/cores do you have on your server?  And it would be
interesting to try varying the --numjobs parameter and see how the
various file systems behave with 1, 2, 4, 8, and 16 threads.
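
For example, a sweep along these lines would show the scaling (the
directory, file size, and iodepth here are placeholders, not your exact
fio job):

    for n in 1 2 4 8 16; do
        fio --name=randrw --directory=/mnt/test --rw=randrw --bs=4k \
            --direct=1 --ioengine=libaio --iodepth=16 --size=8G \
            --numjobs=$n --group_reporting --runtime=60 --time_based
    done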

The other thing that's worth checking is to try using filefrag -v on
the test file after the benchmark has finished, just to make sure the
file layout is sane.  It should be, but I just want to double check...
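
For instance (the path is a placeholder for wherever your test file
lives):

    filefrag -v /mnt/test/testfile

A sane layout shows a handful of large, mostly contiguous extents rather
than thousands of tiny ones.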

						- Ted

On Fri, Mar 02, 2012 at 08:50:55AM +0800, Xupeng Yun wrote:
> On Fri, Mar 2, 2012 at 03:47, Ted Ts'o <tytso@....edu> wrote:
> > Two things I'd try:
> >
> > #1) If this is a freshly created file system, the kernel may be
> > initializing the inode table in the background, and this could be
> > interfering with your benchmark workload.  To address this, you can
> > either (a) add the mount option noinit_itable, (b) add the mke2fs
> > option "-E lazy_itable_init=0" --- but this will cause the mke2fs to
> > take a lot longer, or (c) mount the file system and wait until
> > "dumpe2fs /dev/md3 | tail" shows that the last block group has the
> > ITABLE_ZEROED flag set.  For benchmarking purposes on a scratch
> > workload, option (a) above is the fastest thing to do.
> >
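
Spelled out as commands, those three options would look roughly like this
(the mount point is a placeholder; /dev/md3 is the device from the
dumpe2fs example):

    mount -o noinit_itable /dev/md3 /mnt/test        # option (a)
    mke2fs -t ext4 -E lazy_itable_init=0 /dev/md3    # option (b)
    dumpe2fs /dev/md3 | tail    # option (c): wait for ITABLE_ZEROED on the last group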
> 
> Thank you, Ted. I followed this and got the same result (read IOPS ~950
> / write IOPS ~100).
> 
> > #2) It could be that the file system is choosing blocks farther away
> > from the beginning of the disk, which is slower, whereas the fio on
> > the raw disk will use the blocks closest to the beginning of the disk,
> > which are the fastest ones.  You could try creating the file system so
> > it is only 10GB, and then try running fio on that small, truncated
> > file system, and see if that makes a difference.
> 
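
As a concrete sketch of that suggestion (the block count is illustrative:
2621440 4k blocks is roughly 10GB):

    mke2fs -t ext4 -b 4096 /dev/md3 2621440
    mount /dev/md3 /mnt/test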
> I created LVM on top of the RAID10 device and then created a smaller
> LV (20GB). After that I ran the benchmarks against the very same LV
> with different file systems; the results are interesting:
> 
> xfs (read IOPS ~1700 / write IOPS ~200)
> ext4 (read IOPS ~950 / write IOPS ~100)
> ext3( read IOPS ~900 / write IOPS ~100)
> reiserfs (read IOPS ~930 / write IOPS ~100)
> btrfs (read IOPS ~1200 / write IOPS ~120)
> 
> I got very bad performance from XFS
> (http://www.spinics.net/lists/xfs/msg08688.html) about two months ago,
> which was caused by known XFS bugs. I then tried ext4 on some of my
> servers, and it worked very well until I set up a new server with
> software RAID10.
> 
> What should I learn in order to understand what's happening? Any
> suggestions are appreciated.
> 
> -- 
> Xupeng Yun
> http://about.me/xupeng