Date:	Fri, 17 Jul 2009 17:14:44 -0400
From:	Andreas Dilger <adilger@....com>
To:	Stephan Kulow <coolo@...e.de>
Cc:	Theodore Tso <tytso@....edu>, linux-ext4@...r.kernel.org
Subject: Re: file allocation problem

On Jul 17, 2009  20:02 +0200, Stephan Kulow wrote:
> On Friday 17 July 2009 16:26:28 Theodore Tso wrote:
> > And this isn't necessarily going to help; if the 16 block groups
> > (2**4) in the flex_bg for the /usr/bin directory are all badly
> > fragmented, then when you create new files in /usr/bin, they will
> > still be fragmented.
>
> Yeah, but even the file in /tmp/nd got 3 extents. My file is 1142 blocks,
> and my mb_groups says 2**9 is the highest possible value. So I guess I will
> indeed try to create the file system from scratch to test the allocator for
> real.

The defrag code needs to become smarter, so that it finds small files
sitting in the middle of free space and migrates them into small gaps
elsewhere.  That would allow larger files to be defragged once there are
large chunks of free space.
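
To make that concrete, here is a minimal user-space sketch of the planning
step.  It is purely illustrative -- not the actual defrag code -- and the
struct chunk type and best_fit_gap() helper are invented for the example:

#include <stdio.h>

/* A file or a free gap, described by its start block and length. */
struct chunk { unsigned long start; unsigned long blocks; };

/* Return the index of the smallest free gap that still fits `need`
 * blocks, or -1 if none does.  Best-fit keeps the big gaps intact
 * for the big files. */
static int best_fit_gap(const struct chunk *gaps, int ngaps,
                        unsigned long need)
{
	int best = -1;

	for (int i = 0; i < ngaps; i++) {
		if (gaps[i].blocks >= need &&
		    (best < 0 || gaps[i].blocks < gaps[best].blocks))
			best = i;
	}
	return best;
}

int main(void)
{
	/* Small files sitting in the middle of otherwise-free space. */
	struct chunk files[] = { { 1000, 4 }, { 5000, 12 }, { 9000, 7 } };
	/* Small gaps elsewhere that they could be packed into. */
	struct chunk gaps[] = { { 200, 8 }, { 300, 12 }, { 400, 16 } };

	for (int i = 0; i < 3; i++) {
		int g = best_fit_gap(gaps, 3, files[i].blocks);

		if (g < 0)
			continue;
		printf("move %lu-block file at %lu into gap at %lu\n",
		       files[i].blocks, files[i].start, gaps[g].start);
		gaps[g].start += files[i].blocks;	/* shrink the gap */
		gaps[g].blocks -= files[i].blocks;
	}
	return 0;
}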

> > allocator tries to keep files aligned on power of two boundaries,
> > which tends to help this a lot (although this means that dumpe2fs -h
> > will show a bunch of holes that makes the free space look more
> > fragmented than it really is), but the ext3 allocator doesn't have any
> > such smarts on it.
> But there is nothing packing the blocks as the groups fill up, so these
> holes will always cause fragmentation once the file system gets full, right?

Well, this isn't quite correct.  The mballoc code only tries to allocate
"large" files on power-of-two boundaries, where "large" means 64kB or more
by default and is tunable in /proc.  For smaller files, it tries to pack
them together into the same block group, or into gaps that are exactly
the size of the file.
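
As a toy illustration of that split (this is NOT the kernel's mballoc
code; the 16-block threshold below is just the 64kB default expressed in
4kB blocks, corresponding to the /proc tunable, and choose_goal() is an
invented name):

#include <stdio.h>

/* Round n up to the next power of two. */
static unsigned long pow2_at_least(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Requests of at least stream_req blocks get their goal aligned to a
 * power-of-two boundary; smaller requests keep the goal as-is so they
 * can be packed next to their neighbors. */
static unsigned long choose_goal(unsigned long goal, unsigned long len)
{
	const unsigned long stream_req = 16;	/* 64kB in 4kB blocks */
	unsigned long align;

	if (len < stream_req)
		return goal;			/* small file: pack it */
	align = pow2_at_least(len);
	return (goal + align - 1) & ~(align - 1);
}

int main(void)
{
	printf("4-block file, goal 1000 -> %lu\n", choose_goal(1000, 4));
	printf("20-block file, goal 1000 -> %lu\n", choose_goal(1000, 20));
	return 0;
}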

> So I guess online defragmentation first needs to pretend to do an online
> resize so it can use the gained free space. Now I have something to test... :)

Yes, that would give you some good free space at the end of the filesystem.
Then find the largest files in the filesystem, migrate them into that space,
and finally defrag the smaller files.
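
To pick those candidates, a tool can rank files by extent count using the
FIEMAP ioctl (the same interface modern filefrag uses).  A minimal sketch,
with error handling mostly trimmed:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Count a file's extents.  With fm_extent_count == 0 the kernel only
 * fills in fm_mapped_extents and returns no extent records. */
static int count_extents(const char *path)
{
	struct fiemap fm;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = ~0ULL;		/* map the whole file */
	fm.fm_extent_count = 0;		/* just count */
	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return fm.fm_mapped_extents;
}

int main(int argc, char **argv)
{
	for (int i = 1; i < argc; i++)
		printf("%s: %d extents\n", argv[i], count_extents(argv[i]));
	return 0;
}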

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
