Message-ID: <alpine.LFD.2.00.0903271452521.3994@localhost.localdomain>
Date: Fri, 27 Mar 2009 15:01:40 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Andreas T.Auer" <andreas.t.auer_lkml_73537@...us.ath.cx>
cc: Theodore Tso <tytso@....edu>, Alan Cox <alan@...rguk.ukuu.org.uk>,
Matthew Garrett <mjg59@...f.ucam.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29

On Fri, 27 Mar 2009, Andreas T.Auer wrote:
>
> > The fundamental idea here is not all files need to be forced to disk
> > on close. Not all files need fsync(), or even fbarrier().
>
> An fbarrier() on close() would reflect the thinking of a lot of
> developers.

It also happens to be what pretty much all network filesystems end up
implementing.
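
As a rough illustration (a sketch only, not actual NFS code), the way a
network filesystem typically ends up with this flush-on-close behaviour
in the kernel is by wiring up the VFS ->flush file operation, which is
called on every close():

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Sketch only: ->flush runs at close() time, so writing back (and
 * waiting on) the file's dirty pages here is what gives close() its
 * barrier-like behaviour and lets write errors be reported to close().
 */
static int examplefs_file_flush(struct file *file, fl_owner_t id)
{
        return filemap_write_and_wait(file->f_mapping);
}

static const struct file_operations examplefs_file_ops = {
        /* ... usual read/write/mmap ops ... */
        .flush  = examplefs_file_flush,
};
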
That said, there's a reason many people prefer local filesystems to even
high-performance NFS - latency (especially for metadata which even modern
versions of NFS cannot cache effectively) just sucks when you have to go
over the network. It pretty much doesn't matter _how_ fast your network or
server is.

One thing that might make sense is to make "close()" start background
writeout for that file (modulo issues like laptop mode) with low priority.
No, it obviously doesn't guarantee any kind of filesystem coherency, but
it _does_ mean that the window for the bad cases goes from potentially 30
seconds down to fractions of seconds. That's likely quite a bit of
improvement in practice.
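
As a purely illustrative userspace sketch of that idea (not something
proposed in this thread), sync_file_range(2) with SYNC_FILE_RANGE_WRITE
already lets an application kick off asynchronous writeback at close()
time without waiting for it, minus the priority and laptop-mode handling:

/*
 * Sketch only: SYNC_FILE_RANGE_WRITE queues write-out of dirty pages
 * but does not wait for completion, so this is a "start writeback now"
 * hint, not a durability guarantee like fsync().
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int close_with_background_writeout(int fd)
{
        /* offset 0, nbytes 0 means "from offset to end of file" */
        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
        return close(fd);
}
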
IOW, no "hard barriers", but simply more of an "even in the absence of
fsync we simply aim for the user to have to be _really_ unlucky to ever
hit any bad cases".

Linus