Date:	Tue, 30 Mar 2010 12:14:47 -0400
From:	Greg Freemyer <greg.freemyer@...il.com>
To:	Akira Fujita <a-fujita@...jp.nec.com>
Cc:	Theodore Tso <tytso@....edu>,
	ext4 development <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH] e2fsprogs: Fix the overflow in e4defrag with 2GB over 
	file

On Tue, Mar 30, 2010 at 2:35 AM, Akira Fujita <a-fujita@...jp.nec.com> wrote:
> e2fsprogs: Fix the overflow in e4defrag with 2GB over file
>
> From: Akira Fujita <a-fujita@...jp.nec.com>
>
> In e4defrag, we use a locally defined posix_fallocate interface.
> Its "offset" and "len" arguments are of off_t (long) type, so
> their upper limit is 2GB - 1 byte.
> Thus, if we run e4defrag on a file larger than 2GB, an overflow
> occurs when the fallocate syscall is called.
>
> To fix this issue, I add a new define, _FILE_OFFSET_BITS 64, to use
> 64-bit offsets for the filesystem-related syscalls in e4defrag.c.
> (This patch also includes an open-mode fix which has been
> posted but not yet merged into the e2fsprogs git tree:
> http://lists.openwall.net/linux-ext4/2010/01/19/3)
>
> Reported-by: David Calinski <david@...lrecall.com>
> Signed-off-by: Akira Fujita <a-fujita@...jp.nec.com>
> ---
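
(For reference, the overflow described above comes down to off_t being
a 32-bit long on 32-bit builds unless _FILE_OFFSET_BITS is defined as
64 before the system headers are pulled in.  A minimal illustration of
that point follows; it is not the patched e4defrag code.)

#define _FILE_OFFSET_BITS 64	/* must come before any system header */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	/*
	 * With _FILE_OFFSET_BITS 64 this prints 8 even on a 32-bit
	 * build; without it, off_t is 4 bytes there, so any offset or
	 * length past 2GB - 1 wraps before it ever reaches the
	 * fallocate syscall.
	 */
	printf("sizeof(off_t) = %zu\n", sizeof(off_t));
	return 0;
}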

Akira,

I haven't looked at the e4defrag code since September, but does it
still defrag large files in one huge pass?

Thus a 100GB sparse file being used to hold a VM virtual disk is
defragged all at once.

And worse, when data is written to one of the holes in the sparse
file, the entire file has to be defragged again?

If so, I think that is a broken design, and e4defrag should simply
skip these large files for now.

The proper fix would be to defrag one "donor extent" at a time.

I.e., attempt to allocate a full 128 MB extent for the donor file.  If
successful, replace the first partial extent in the target file with
the donor extent.  Repeat until done (see the sketch below).
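
Roughly like this (a sketch only, not working e4defrag code; the
struct layout and ioctl number are what I believe e4defrag.c defines
locally for EXT4_IOC_MOVE_EXT, offsets and lengths are in filesystem
blocks, and the real tool would also check whether a range is already
contiguous before touching it):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

/* Local copy of the EXT4_IOC_MOVE_EXT interface, the way e4defrag.c
 * carries its own copy; layout assumed from the 2.6.31-era kernel. */
struct move_extent {
	int32_t  reserved;	/* should be zero */
	uint32_t donor_fd;	/* donor file descriptor */
	uint64_t orig_start;	/* logical start (blocks) in the orig file */
	uint64_t donor_start;	/* logical start (blocks) in the donor file */
	uint64_t len;		/* length to move, in blocks */
	uint64_t moved_len;	/* blocks actually moved (set by the kernel) */
};
#define EXT4_IOC_MOVE_EXT	_IOWR('f', 15, struct move_extent)

static int defrag_one_donor_chunk_at_a_time(int orig_fd, int donor_fd,
					     uint64_t file_blocks,
					     unsigned int blksize)
{
	uint64_t chunk = (128ULL << 20) / blksize;	/* 128 MB in blocks */
	uint64_t start;

	for (start = 0; start < file_blocks; start += chunk) {
		struct move_extent me;
		uint64_t len = file_blocks - start;

		if (len > chunk)
			len = chunk;

		/* 1) Preallocate this range in the donor file; a real
		 *    implementation would then verify the donor range
		 *    came back as one contiguous extent and skip the
		 *    range if it did not. */
		if (posix_fallocate(donor_fd, (off_t)(start * blksize),
				    (off_t)(len * blksize)))
			continue;

		/* 2) Swap the donor blocks into the target file. */
		me.reserved    = 0;
		me.donor_fd    = (uint32_t)donor_fd;
		me.orig_start  = start;
		me.donor_start = start;
		me.len         = len;
		me.moved_len   = 0;
		if (ioctl(orig_fd, EXT4_IOC_MOVE_EXT, &me) < 0)
			return -1;

		/* 3) Drop the fragmented blocks that were swapped into
		 *    the donor before moving to the next chunk. */
		if (ftruncate(donor_fd, 0) < 0)
			return -1;
	}
	return 0;
}

If I remember right, 128 MB with 4KB blocks is also the largest single
extent ext4 will create, so asking for more per donor request would
not buy anything anyway.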

That way you have a few advantages:

1) You never need more than one free extent to work with.

2) Once you defrag the beginning of a file, you never have to defrag
it again.  Thus when a sparse file gets new blocks/extents allocated,
only the areas of the file that are truly fragmented have to be
defragmented.

The one negative I can see is that the extents may not be localized
well with this approach.  Is that a major concern?  Is there a way to
place the new donor extent request near the extent it will logically
follow?

For the last issue, I think you've been working on an mballoc patch
that would give e4defrag the ability to control mballoc on a per-inode
basis.  If not, the ohsm project has a patch for something similar.  I
haven't worked with the ohsm mballoc patch, so I'm not sure how it
works.

Greg