Date:	Mon, 02 Aug 2010 11:12:23 -0500
From:	Eric Sandeen <sandeen@...hat.com>
To:	Kay Diederichs <Kay.Diederichs@...-konstanz.de>
CC:	Dave Chinner <david@...morbit.com>,
	linux <linux-kernel@...r.kernel.org>,
	Ext4 Developers List <linux-ext4@...r.kernel.org>,
	Karsten Schaefer <karsten.schaefer@...-konstanz.de>
Subject: Re: ext4 performance regression 2.6.27-stable versus 2.6.32 and later

On 08/02/2010 09:52 AM, Kay Diederichs wrote:
> Dave,
> 
> as you suggested, we reverted "ext4: Avoid group preallocation for
> closed files" and this indeed fixes a big part of the problem: after
> booting the NFS server we get
> 
> NFS-Server: turn5 2.6.32.16p i686
> NFS-Client: turn10 2.6.18-194.8.1.el5 x86_64
> 
> exported directory on the nfs-server:
> /dev/md5 /mnt/md5 ext4
> rw,seclabel,noatime,barrier=1,stripe=512,data=writeback 0 0
> 
>  48 seconds for preparations
>  28 seconds to rsync 100 frames with 597M from nfs directory
>  57 seconds to rsync 100 frames with 595M to nfs directory
>  70 seconds to untar 24353 kernel files with 323M to nfs directory
>  57 seconds to rsync 24353 kernel files with 323M from nfs directory
> 133 seconds to run xds_par in nfs directory
> 425 seconds to run the script

Interesting, I had found this commit to be a problem for small files
which are constantly created & deleted; the commit had the effect of
packing the newly created files in the first free space that could be
found, rather than walking down the disk leaving potentially fragmented
freespace behind (see seekwatcher graph attached).  Reverting the patch
sped things up for this test, but left the filesystem freespace in bad
shape.
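The churn in that test looked roughly like this (file sizes, counts, and rounds here are purely illustrative, not the actual benchmark): batches of small files created and then deleted, over and over, so the allocator keeps re-deciding where each new batch lands.

```shell
# Hypothetical sketch of a small-file create/delete workload; sizes and
# counts are illustrative only.
dir=$(mktemp -d)
for round in 1 2 3; do
    # create a batch of small files...
    for i in $(seq 1 100); do
        dd if=/dev/zero of="$dir/f$i" bs=4k count=4 2>/dev/null
    done
    # ...then delete them all, handing the space back to the allocator
    rm -f "$dir"/f*
done
remaining=$(ls "$dir" | wc -l)
echo "$remaining files left after churn"
rm -rf "$dir"
```

With the commit in place, each new batch tends to get packed back into the space just freed near the start of the disk; with it reverted, allocation keeps walking outward instead.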

But you seem to see one of the largest effects in here:

261 seconds to rsync 100 frames with 595M to nfs directory
vs
 57 seconds to rsync 100 frames with 595M to nfs directory

with the patch reverted making things go faster.  So you are doing 100
6MB writes to the server, correct?  Is the filesystem mkfs'd fresh
before each test, or is it aged?  If not mkfs'd, is it at least
completely empty prior to the test, or does data remain on it?  I'm just
wondering if fragmented freespace is contributing to this behavior as
well.  If there is fragmented freespace, then with the patch I think the
allocator is more likely to hunt around for small discontiguous chunks
of free space, rather than going further out on the disk looking for a
large area to allocate from.
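One quick way to check for that on the server is e2freefrag from e2fsprogs, which prints a histogram of free extent sizes; on the real system you would just run it read-only against the exported device (e.g. e2freefrag /dev/md5). The sketch below builds a throwaway scratch image so the command can be demonstrated without touching a real device:

```shell
# Build a small scratch ext4 image (stand-in for the real device) and
# report its free-space fragmentation.  On the server you would instead
# point e2freefrag at the md device itself.
dd if=/dev/zero of=/tmp/frag-test.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext4 /tmp/frag-test.img
# e2freefrag prints min/max/avg free extent sizes and a histogram of
# free extent size ranges.
e2freefrag /tmp/frag-test.img | tee /tmp/frag-report.txt
rm -f /tmp/frag-test.img
```

Lots of small free extents and few large ones in that histogram would be a sign that fragmented freespace is steering the allocator.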

It might be interesting to use seekwatcher on the server to visualize
the allocation/IO patterns while the test runs, at least up to this point?
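If you want to try it, an invocation along these lines should work; seekwatcher drives blktrace underneath, so it needs blktrace installed and root on the server. The device name and workload script below are placeholders, not the real test:

```shell
# Placeholders -- substitute the real device and the real test driver.
DEV=/dev/md5                  # md device backing the ext4 export
WORKLOAD='./run_nfs_test.sh'  # hypothetical name for the test script

# seekwatcher traces $DEV for the duration of $WORKLOAD and renders the
# seek/throughput picture to a PNG (run as root on the server):
#   seekwatcher -d "$DEV" -t md5.trace -o md5-io.png -p "$WORKLOAD"
echo "would trace $DEV while running: $WORKLOAD"
```

Comparing the resulting graphs with and without the patch reverted should make the allocation pattern difference obvious at a glance.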

-Eric

Download attachment "rhel6_ext4_comparison.png" of type "image/png" (113533 bytes)
