Date:	Fri, 01 Feb 2013 22:33:21 +1100
From:	Bron Gondwana <brong@...tmail.fm>
To:	Robert Mueller <robm@...tmail.fm>, "Theodore Ts'o" <tytso@....edu>
Cc:	Eric Sandeen <sandeen@...hat.com>,
	Linux Ext4 mailing list <linux-ext4@...r.kernel.org>
Subject: Re: fallocate creating fragmented files

On Thu, Jan 31, 2013, at 09:51 AM, Robert Mueller wrote:
> Also, while e4defrag will try and defrag a file (or multiple files), is
> there any way to actually defrag the entire filesystem to try and move
> files around more intelligently to make larger extents? I guess running
> e4defrag on the entire filesystem multiple times would help, but it
> still would not move small files that are breaking up large extents. Is
> there any way to do that?

In particular, the way that Cyrus works seems entirely suboptimal for ext4.
The index and database files receive very small appends (108 bytes per message
for the index, and probably just a few hundred bytes per write for most of the
twoskip databases), and they happen pretty much randomly to one of tens of
thousands of these little files, depending on which mailbox received the message.
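For anyone wanting to reproduce the effect, here's a rough simulation of that
append pattern (file count, record size, and paths are illustrative, not taken
from a real Cyrus spool):

```shell
# Simulate small appends scattered pseudo-randomly across many index
# files, the pattern described above.  Everything under /tmp/cyrus-sim
# is throwaway.
rm -rf /tmp/cyrus-sim
mkdir -p /tmp/cyrus-sim
for i in $(seq 1 50); do touch "/tmp/cyrus-sim/mbox$i.index"; done
for n in $(seq 1 200); do
  # pick a "mailbox" pseudo-randomly and append a 108-byte record
  i=$(( (n * 37) % 50 + 1 ))
  head -c 108 /dev/zero >> "/tmp/cyrus-sim/mbox$i.index"
done
```

Each file ends up growing in many tiny increments, which is exactly the
workload that leaves the allocator no chance to build large extents.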

Over time, as files get deleted, this allocation pattern leaves tons of tiny
holes, so free space ends up scattered fairly evenly across the whole filesystem.

Here's the same experiment on a "fresh" filesystem.  I created this by taking
a server down, copying the entire contents of the SSD to a spare piece of rust,
reformatting, and copying it all back (cp -a).  So the data on there is the
same, just the allocations have changed.
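The copy-off/reformat/copy-back above is the heavyweight, whole-filesystem
version of this.  A minimal per-file sketch of the same idea (the helper name
is mine, not from this thread: rewriting a file lets the allocator pick fresh,
hopefully contiguous, extents) looks like:

```shell
# Rewrite a single file in place so the allocator can choose new
# extents.  NOT safe for files that are being written concurrently,
# and the mv breaks any hard links -- fine for an offline pass only.
defrag_file() {
  f=$1
  cp -p "$f" "$f.defrag" &&   # fresh copy, new allocation
  mv "$f.defrag" "$f"         # atomic replace on the same filesystem
}
```

That's essentially what the full cp -a round trip does for every file at once,
with the added bonus that mkfs throws away the old block bitmaps entirely.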

[brong@...p15 conf]$ fallocate -l 20m testfile
[brong@...p15 conf]$ filefrag -v testfile
Filesystem type is: ef53
File size of testfile is 20971520 (20480 blocks, blocksize 1024)
 ext logical physical expected length flags
   0       0 22913025            8182 unwritten
   1    8182 22921217 22921207   8182 unwritten
   2   16364 22929409 22929399   4116 unwritten,eof
testfile: 3 extents found

As you can see, that's slightly more optimal.  I'm assuming 8182 is the
maximum number of contiguous blocks before you hit an assigned metadata
location and have to skip over it.
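To see how far an aged filesystem has drifted from that, a quick survey of
extent counts across a tree works (filefrag is in e2fsprogs; the directory
argument is whatever spool you care about -- a sketch, not a polished tool):

```shell
# List the most-fragmented files under a directory, worst first,
# by parsing filefrag's "FILE: N extents found" summary lines.
frag_report() {
  dir=$1
  find "$dir" -type f -print0 |
    xargs -0 -r filefrag 2>/dev/null |
    awk -F': ' '{print $2+0, $1}' |
    sort -rn | head -20
}
```

Running that before and after a cp -a round trip makes the difference pretty
stark: the rewritten files collapse to a handful of extents each.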

So in other words, our 2 year old filesystems are shot.  We need to do
this sort of "defrag" on a semi-regular basis.  Joy.

Bron.
-- 
  Bron Gondwana
  brong@...tmail.fm
