Message-ID: <49CD32E1.2020906@ursus.ath.cx>
Date: Fri, 27 Mar 2009 21:11:13 +0100
From: "Andreas T.Auer" <andreas.t.auer_lkml_73537@...us.ath.cx>
To: Theodore Tso <tytso@....edu>
CC: Alan Cox <alan@...rguk.ukuu.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matthew Garrett <mjg59@...f.ucam.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29
On 2009-03-27 20:32 Theodore Tso wrote:
> We could potentially make close() imply fbarrier(), but there are
> plenty of times when that might not be such a great idea. If we do
> that, we're back to requiring synchronous data writes for all files on
> close()
fbarrier() on close() would only mean that the data shouldn't be
written after the metadata, and that new metadata shouldn't be written
_before_ old metadata, so you can also delay committing the "dirty"
metadata until the real data have been written. Synchronous data
writes are not strictly required.
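A minimal user-space sketch of what that means, assuming the implicit
fbarrier()-on-close() behavior described above (the function and file
name are made up for illustration; the application itself needs no new
call):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: plain write-and-close, no fsync(). */
int write_config(const char *path, const char *buf, size_t len)
{
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;

        if (write(fd, buf, len) != (ssize_t)len) {
                close(fd);
                return -1;
        }

        /*
         * With an explicit fsync(fd) here, data and metadata would be
         * forced out before close() returns.  With an implicit
         * fbarrier() on close(), the kernel may delay both, but must
         * not commit the new metadata before the file's data blocks.
         */
        return close(fd);
}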
> The fundamental idea here is not all files need to be forced to disk
> on close. Not all files need fsync(), or even fbarrier().
An fbarrier() on close() would reflect the thinking of a lot of
developers. You might call them stupid and incompetent, but they
surely are the majority. When closing A before creating B, they don't
expect to see B without a completed A, even though they accept that,
if the system crashes, neither A nor B may have been written yet.
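For illustration, a sketch of that pattern (file names invented): A is
written and closed before B is created, and the expectation is only
about ordering, not about either file being durable immediately:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *data = "results of a long computation\n";
        int a, b;

        /* A is fully written and closed first. */
        a = open("A.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (a < 0)
                return 1;
        if (write(a, data, strlen(data)) != (ssize_t)strlen(data))
                return 1;
        close(a);

        /*
         * B (e.g. a "done" marker) is created only afterwards.  After
         * a crash, developers accept that neither file may exist, but
         * they do not expect to find B next to an empty or truncated A.
         */
        b = open("B.done", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (b < 0)
                return 1;
        close(b);

        return 0;
}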
If you have smart developers, you might give them something new so
they can speed things up with some extra code, e.g. when they create
data that can be restored by other means, but an automatic fbarrier()
on close() would be the better default behavior.
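Purely as a sketch of that opt-out idea (the O_NOBARRIER flag below is
invented for illustration and does not exist): easily regenerated data,
such as a cache, could skip the implicit ordering, while everything
else keeps the safe default.

#include <fcntl.h>
#include <unistd.h>

/* Invented for illustration only; no such flag exists. */
#ifndef O_NOBARRIER
#define O_NOBARRIER 0
#endif

void write_cache_entry(const char *path, const void *buf, size_t len)
{
        /*
         * Cache contents can be rebuilt after a crash, so the
         * application explicitly gives up the implicit fbarrier()
         * ordering on close() for this file.
         */
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_NOBARRIER,
                      0644);
        if (fd < 0)
                return;
        write(fd, buf, len);
        close(fd);
}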
Andreas