Date:	Thu, 31 Jan 2013 09:51:22 +1100
From:	Robert Mueller <robm@...tmail.fm>
To:	"Theodore Ts'o" <tytso@....edu>
Cc:	Eric Sandeen <sandeen@...hat.com>,
	Bron Gondwana <brong@...tmail.fm>, linux-ext4@...r.kernel.org
Subject: Re: fallocate creating fragmented files


> The most likely reason is that it depends on transaction boundaries.
> After a block has been released, we can't reuse it until after the
> jbd2 transaction which contains the deletion of the inode has
> committed.  So even after you've deleted the file, we can't reuse the
> blocks right away.  The other thing which will influence the block
> allocation is which block group the last allocation was for that
> particular file.  So if blocks become available after a commit
> completes, if we've started allocating in another block group, we
> won't go back to the initial block group.

Ok, makes sense.
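
(To check I follow the explanation above: with a test roughly like the
sketch below, recreating a file immediately after deleting it shouldn't be
able to reuse the freed blocks, but after a sync and the commit interval it
should. This is just a Python sketch of the idea; the file names, the 512MB
size and the 5 second default commit interval are assumptions on my part.)

import os, subprocess, time

SIZE = 512 * 1024 * 1024          # made-up size

def alloc(path, size):
    # preallocate with fallocate(), like the test files above
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)

def extents(path):
    # dump the extent map, as in the filefrag -v output below
    return subprocess.run(["filefrag", "-v", path],
                          capture_output=True, text=True).stdout

alloc("testA", SIZE)
print(extents("testA"))
os.unlink("testA")

# Recreate straight away: the just-freed blocks shouldn't be reusable until
# the jbd2 transaction containing the delete has committed.
alloc("testB", SIZE)
print(extents("testB"))
os.unlink("testB")

# Force a commit and wait out the (default 5s) commit interval; now the
# allocator should be free to hand testA's old blocks back out.
os.sync()
time.sleep(6)
alloc("testC", SIZE)
print(extents("testC"))
os.unlink("testC")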

However, it still doesn't answer the question of why the allocator chooses
smaller extents over larger ones nearby.

For instance, look at the filefrag -v output for testfile and testfile2
again. Remember, these were created immediately one after the other.

testfile:
...
 398   18841 44779580 44779043     26 unwritten
 399   18867 44780335 44779606     26 unwritten
 400   18893 44780658 44780361     26 unwritten

testfile2:
...
  13     814 44792388 44788982    189 unwritten
  14    1003 44792578 44792577    157 unwritten

Those extents are quite near each other. So when testfile was being
allocated, there were bigger extents right nearby that were ignored, and
they ended up being used when the next file, testfile2, was allocated. Why?
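
(For what it's worth, a rough way to summarise extent sizes from that
filefrag -v output, assuming the column order shown above where length is
the last numeric column; the parsing is deliberately naive.)

import subprocess, sys

def extent_lengths(path):
    # scrape `filefrag -v`; data rows start with the extent index
    out = subprocess.run(["filefrag", "-v", path],
                         capture_output=True, text=True).stdout
    lengths = []
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            nums = [int(f) for f in fields if f.isdigit()]
            lengths.append(nums[-1])   # length is the last numeric column
    return lengths

for path in sys.argv[1:]:
    lens = extent_lengths(path)
    if lens:
        print("%s: %d extents, min %d, avg %.1f, max %d blocks"
              % (path, len(lens), min(lens), sum(lens) / len(lens), max(lens)))

Running it over testfile and testfile2 makes the comparison easier than
eyeballing the raw rows.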

Also, while e4defrag will try to defragment a file (or multiple files), is
there any way to defragment the entire filesystem, moving files around more
intelligently to make larger extents? I guess running e4defrag over the
whole filesystem multiple times would help (something like the sketch
below), but it still wouldn't move the small files that are breaking up
large extents. Is there any way to do that?
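
(Roughly what I mean by whole-filesystem passes, for concreteness; the
mount point and pass count are placeholders, and e4defrag -c is only used
to report the fragmentation score before and after.)

import subprocess

MOUNTPOINT = "/data"      # placeholder
PASSES = 3                # arbitrary

def frag_report():
    # `e4defrag -c` only reports a fragmentation score, it changes nothing
    return subprocess.run(["e4defrag", "-c", MOUNTPOINT],
                          capture_output=True, text=True).stdout

print(frag_report())
for _ in range(PASSES):
    # each pass walks the tree and defrags regular files in place (needs root)
    subprocess.run(["e4defrag", MOUNTPOINT])
print(frag_report())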

Rob