Date:	Mon, 16 Mar 2009 16:12:11 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Theodore Tso <tytso@....edu>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Daniel Phillips <phillips@...nq.net>,
	linux-fsdevel@...r.kernel.org, tux3@...3.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [Tux3] Tux3 report: Tux3 Git tree available

On Sun, Mar 15, 2009 at 05:44:26PM -0400, Theodore Tso wrote:
> On Sun, Mar 15, 2009 at 02:45:04PM +1100, Nick Piggin wrote:
> > > As it happens, Tux3 also physically allocates each _physical_ metadata
> > > block (i.e., what is currently called buffer cache) at the time it is
> > > dirtied.  I don't know if this is the best thing to do, but it is
> > > interesting that you do the same thing.  I also don't know if I want to
> > > trust a library to get this right, before having completely proved out
> > > the idea in a non-trivial filesystem.  But good luck with that!  It
> > 
> > I'm not sure why it would be a big problem. fsblock isn't allocating
> > the block itself of course, it just asks the filesystem to. It's
> > trivial to do for fsblock.
> 
> So the really unfortunate thing about allocating the block as soon as
> the page is dirty is that it spikes out delayed allocation.  By
> delaying the physical allocation of the logical->physical mapping as
> long as possible, the filesystem can select the best possible physical
> location.

This is no different to the way delayed allocation with bufferheads
works. Both XFS and ext4 set the buffer_delay flag instead of
allocating up front so that later on in ->writepages we can do
optimal delayed allocation. AFAICT fsblock works the same way....
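
The pattern, in rough outline (a simplified sketch only, not the real
XFS/ext4/fsblock code; example_alloc_near() and the function names are
made up, the buffer flag helpers are the real bufferhead ones):

#include <linux/fs.h>
#include <linux/buffer_head.h>

/* Hypothetical stand-in for the filesystem's real block allocator. */
extern sector_t example_alloc_near(struct super_block *sb, sector_t goal);

/*
 * Write time (get_block with create=1): reserve space in-core and mark
 * the buffer delayed -- no physical block is chosen yet.  The real
 * filesystems also map the buffer to a placeholder block here.
 */
static int example_get_block_delay(struct inode *inode, sector_t iblock,
				   struct buffer_head *bh, int create)
{
	set_buffer_new(bh);
	set_buffer_delay(bh);
	return 0;
}

/*
 * ->writepages time: the whole dirty range is now known, so the
 * allocator can pick the best physical location in one go.
 */
static void example_writeback_map(struct super_block *sb,
				  struct buffer_head *bh, sector_t goal)
{
	if (buffer_delay(bh)) {
		map_bh(bh, sb, example_alloc_near(sb, goal));
		clear_buffer_delay(bh);
	}
}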

> XFS, for example, keeps a btree of free regions indexed by
> size so that it can select the perfect location for a newly written
> file which is 24k or 56k long.

Ah, no. It's far more complex than that. To begin with, XFS has
*two* freespace trees per allocation group - one indexed by extent size,
the other by extent starting block.

XFS first searches the by-block tree for a free extent at or near the
target starting block that is big enough. Only if it can't find a
nearby match does it fall back to a size lookup in the by-size tree.
i.e. the fundamental allocation assumption is that locality of data
placement matters far more than optimally filling holes in free
space.....
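
Roughly, the per-AG allocation policy looks like this (an illustrative
sketch, not the real xfs_alloc code; the struct and example_* lookups
are stand-ins for the by-bno and by-size btree cursors):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t agblock_t;   /* block number within an allocation group */
typedef uint32_t extlen_t;    /* extent length in filesystem blocks */

struct free_extent { agblock_t bno; extlen_t len; };
struct ag;                    /* one allocation group, carrying both btrees */

/* by-bno btree: nearest free extent to 'goal' at least 'want' blocks long */
bool example_bno_lookup_near(struct ag *ag, agblock_t goal, extlen_t want,
			     struct free_extent *out);
/* by-size btree: smallest free extent at least 'want' blocks long */
bool example_cnt_lookup_ge(struct ag *ag, extlen_t want,
			   struct free_extent *out);

static bool example_ag_alloc(struct ag *ag, agblock_t goal, extlen_t want,
			     struct free_extent *out)
{
	/* 1. Locality first: take a free extent at or near the goal block. */
	if (example_bno_lookup_near(ag, goal, want, out))
		return true;

	/* 2. Only if nothing nearby is big enough, fall back to a pure
	 *    size match from the by-size tree. */
	return example_cnt_lookup_ge(ag, want, out);
}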

> In addition, XFS uses delayed allocation to avoid the problem of
> uninitialized data becoming visible in the event of a crash.

No it doesn't. Delayed allocation minimises the problem but doesn't
prevent it.  It has been known for years (since before I joined SGI
in 2002) that there is a theoretical timing gap in XFS where the
allocation transaction can commit and a crash can occur before the
data hits the disk, thereby exposing stale data.

The reality is that no-one has ever reported exposing stale data in
this scenario, and there has been plenty of effort expended trying
to trigger it. Hence it has remained in the realm of a theoretical
problem....

> If
> fsblock immediately allocates the physical block, then either the
> uninitialized data might become available on a system crash (which
> is a security problem), or XFS is going to have to force all newly
> written data blocks to disk before a commit.  If that sounds
> familiar it's what ext3's data=ordered mode does, and it's what is
> responsible for the Firefox 3.0 fsync performance problem.

If this were to occur, the obvious solution is to
allocate unwritten extents and do conversion after data I/O
completion. That would result in correct metadata/data ordering in
all cases with only a small performance impact and without
introducing ext3-sync-the-world-like issues...
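
Something like this (a sketch only; the example_* helpers are made up,
but the ordering -- allocate unwritten, write the data, then convert --
is the point):

#include <linux/fs.h>

/* Hypothetical helpers; XFS does the real work in its iomap/ioend paths. */
extern int example_alloc_unwritten(struct inode *inode, loff_t off, size_t len);
extern int example_convert_unwritten(struct inode *inode, loff_t off, size_t len);

/*
 * Writeback: allocate the extent, but flag it unwritten.  Reads of an
 * unwritten extent return zeroes, so a crash between the allocation
 * transaction and the data I/O cannot expose stale disk contents.
 */
static int example_writeback_alloc(struct inode *inode, loff_t off, size_t len)
{
	return example_alloc_unwritten(inode, off, len);
}

/*
 * Data I/O completion: only now, with the data safely on disk, convert
 * the extent to written state in a transaction.
 */
static int example_data_end_io(struct inode *inode, loff_t off, size_t len)
{
	return example_convert_unwritten(inode, off, len);
}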

Ted, I appreciate you telling the world over and over again how bad
XFS is and what you think needs to be done to fix it. Truth is, this
would have been a much better email had you written about it from an
ext4 perspective. That way it wouldn't have been full of errors or
sounded like a kid caught with his hand in the cookie jar:

"It's not my fault! I was only copying XFS! He did it first!"

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
