Message-Id: <E1LOiN8-0001Cj-91@closure.thunk.org>
Date: Sun, 18 Jan 2009 19:52:10 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: linux-ext4@...r.kernel.org
Subject: The meaning of data=ordered as it relates to delayed allocation
An Ubuntu user recently complained about a large number of recently
updated files which were zero-length after a crash. I started looking
more closely at that, and it's because we have an interesting
interpretation of data=ordered. It applies for blocks which are already
allocated, but not for blocks which haven't been allocated yet. This
can be surprising for users; and indeed, for many workloads where you
aren't using berk_db or some other database, all of the files written will
be newly created files (or files which are getting rewritten after
opening with O_TRUNC), so there won't be any difference between
data=writeback and data=ordered.
So I wonder if we should either:
(a) make data=ordered force block allocation and writeback --- which
should just be a matter of disabling the
redirty_page_for_writepage() code path in ext4_da_writepage()
(b) add a new mount option, call it data=delalloc-ordered, which behaves as (a)
(c) change the default mount option to be data=writeback
(d) Do (b) and make it the default
(e) Keep things the way they are
Thoughts, comments? My personal favorite is (b). This allows users
who want something that works functionally much more like ext3 to get
that, while giving us the current speed advantages of a more aggressive
delayed allocation.
- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html