Date:	Sun, 31 Oct 2010 22:47:57 +0000
From:	Hugo Mills <hugo-lkml@...fax.org.uk>
To:	Felipe Contreras <felipe.contreras@...il.com>
Cc:	cwillu <cwillu@...llu.com>,
	Calvin Walton <calvin.walton@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-btrfs@...r.kernel.org, Chris Mason <chris.mason@...cle.com>
Subject: Re: Horrible btrfs performance due to fragmentation

On Mon, Nov 01, 2010 at 12:36:58AM +0200, Felipe Contreras wrote:
> On Mon, Nov 1, 2010 at 12:25 AM, cwillu <cwillu@...llu.com> wrote:
> > btrfs fi defrag isn't recursive.  "btrfs filesystem defrag /home" will
> > defragment the space used to store the folder, without touching the
> > space used to store files in that folder.
> 
> Yes, that came up on the IRC, but:
> 
> 1) It doesn't make sense: "btrfs filesystem" doesn't allow a filesystem
> as argument? Why would anyone want it to be _non_ recursive?

   You missed the subsequent discussion on IRC about the interaction
of COW with defrag. Essentially, if you've got two files that are COW
copies of each other, and one has had something written to it since,
it's *impossible* for both files to be defragmented without making a
full, separate copy of the shared data:

Start with a file (A, etc are data blocks on the disk):

file1 = ABCDEF

Cow copy it:

file1 = ABCDEF
file2 = ABCDEF

Now write to one of them:

file1 = ABCDEF
file2 = ABCDxF

   So, either file1 is contiguous, and file2 is fragmented (with the
block x somewhere else on disk), or file2 is contiguous, and file1 is
fragmented (with E somewhere else on disk). In fact, we've determined
by experiment that when you defrag a file that's sharing blocks with
another one, the file gets copied in its entirety, thus separating the
blocks of the file and its COW duplicate.
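
   (If you want to reproduce that experiment yourself, something
roughly like the following should do it -- a sketch only, assuming a
btrfs filesystem mounted at /mnt/btrfs, a coreutils cp with --reflink
support, and filefrag from e2fsprogs; paths and sizes are purely
illustrative:

    # make a 64MB test file, then a COW copy that shares all its blocks
    dd if=/dev/urandom of=/mnt/btrfs/file1 bs=1M count=64
    cp --reflink=always /mnt/btrfs/file1 /mnt/btrfs/file2

    # overwrite a single 4K block in the middle of file2
    dd if=/dev/urandom of=/mnt/btrfs/file2 bs=4K count=1 seek=1000 conv=notrunc
    sync

    # compare extent layouts before and after defragmenting file2
    filefrag -v /mnt/btrfs/file1 /mnt/btrfs/file2
    btrfs filesystem defrag /mnt/btrfs/file2
    sync
    filefrag -v /mnt/btrfs/file1 /mnt/btrfs/file2

The physical offsets in the second filefrag run should show file2
sitting in its own extents, no longer pointing at the blocks it used
to share with file1.)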

> 2) The filesystem should not degrade performance so horribly no matter
> how long it has been used. Even git has automatic garbage
> collection.

   Since, I believe, btrfs uses COW very heavily internally to ensure
consistency, you can end up with fragmented files and directories very
easily. You probably need some kind of scrubber that goes looking for
fragmented files that aren't sharing blocks with a COW copy, and
defrags them in the background.
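
   (Until such a scrubber exists, something along these lines run from
cron is a crude approximation -- again just a sketch, assuming /home
is on the btrfs filesystem in question, and with the caveat that, as
above, it will unshare any COW copies it touches:

    # walk /home without crossing filesystems and defrag each regular
    # file, at idle I/O priority so it stays out of the way
    ionice -c3 find /home -xdev -type f \
        -exec btrfs filesystem defrag {} \;

The I/O class is inherited by the btrfs processes that find spawns, so
with CFQ the pass only gets disk time when nothing else wants it.)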

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- "No!  My collection of rare, incurable diseases! Violated!" ---   
