Message-ID: <5c49b0ed0607311643r61570665ga4d8a70beaeb17f@mail.gmail.com>
Date: Mon, 31 Jul 2006 16:43:11 -0700
From: "Nate Diller" <nate.diller@...il.com>
To: "Jeff V. Merkey" <jmerkey@...fmountaingroup.com>
Cc: "Gregory Maxwell" <gmaxwell@...il.com>,
"Alan Cox" <alan@...rguk.ukuu.org.uk>,
"Clay Barnes" <clay.barnes@...il.com>,
"Rudy Zijlstra" <rudy@...ons.demon.nl>,
"Adrian Ulrich" <reiser4@...nkenlights.ch>, vonbrand@....utfsm.cl,
ipso@...ppymail.ca, reiser@...esys.com, lkml@...productions.com,
jeff@...zik.org, tytso@....edu, linux-kernel@...r.kernel.org,
reiserfs-list@...esys.com
Subject: Re: the " 'official' point of view" expressed by kernelnewbies.org regarding reiser4 inclusion
On 7/31/06, Jeff V. Merkey <jmerkey@...fmountaingroup.com> wrote:
> Nate Diller wrote:
>
> > On 7/31/06, Jeff V. Merkey <jmerkey@...fmountaingroup.com> wrote:
> >
> >> Gregory Maxwell wrote:
> >>
> >> > On 7/31/06, Alan Cox <alan@...rguk.ukuu.org.uk> wrote:
> >> >
> >> >> It's well accepted that reiserfs3 has some robustness problems in
> >> >> the face of physical media errors. The structure of the file system
> >> >> and the tree basis make it very hard to avoid such problems. XFS
> >> >> appears to have managed to achieve both robustness and better data
> >> >> structures.
> >> >>
> >> >> How reiser4 compares I've no idea.
> >> >
> >> >
> >> > Citation?
> >> >
> >> > I ask because your claim differs from the only detailed research
> >> > that I'm aware of on the subject[1]. In figure 2 of the iron
> >> > filesystems paper, ext3 is shown to ignore a great number of
> >> > data-loss-inducing failure conditions that Reiser3 detects and
> >> > panics under.
> >> >
> >> > Are you sure that you aren't commenting on cases where Reiser3
> >> > alerts the user to a critical data condition (via a panic), which
> >> > leads to a trouble report, while ext3 ignores the problem, which
> >> > suppresses the trouble report from the user?
> >> >
> >> > *1) http://www.cs.wisc.edu/adsl/Publications/iron-sosp05.pdf
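The distinction above shows up at the syscall boundary: a filesystem
that detects a media error can fail the write() or the fsync(), while
one that ignores it returns success to the application. A minimal
application-side check, as a sketch; the mount point and file name are
hypothetical, and exact error reporting varies by filesystem and
kernel version:

/* Minimal sketch: surface write-path I/O errors in the application,
 * regardless of whether the filesystem panics, remounts read-only,
 * or stays quiet. The path below is a hypothetical example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/test/probe.dat";  /* hypothetical mount */
    char buf[4096];
    memset(buf, 0xA5, sizeof(buf));

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("write");   /* short write, or -EIO from the fs */
        return 1;
    }
    /* Many media errors are only reported when the data is forced out
     * of the cache, so the fsync() return value matters most here. */
    if (fsync(fd) < 0) {
        perror("fsync");   /* e.g. EIO: data never reached the disk */
        return 1;
    }
    if (close(fd) < 0) {
        perror("close");
        return 1;
    }
    puts("write + fsync completed with no reported errors");
    return 0;
}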
> >>
> >> Hi Gregory, Wikimedia Foundation and LKML?
> >>
> >> How's Wikimania going? :-)
> >>
> >> What he says is correct. I have seen some serious issues with
> >> reiserfs in terms of stability and data corruption. Reiser is,
> >> however, FASTER, but the statement that it has robustness issues is
> >> accurate. I was using reiserfs, but we opted to make EXT3 the default
> >> for Solera appliances, even when using Suse 10, due to issues I have
> >> seen with data corruption and hard hangs on RAID 0 read/write sector
> >> errors. I have stopped using it for local drives and based everything
> >> on EXT3. Not to say it won't get there eventually, but file systems
> >> have to endure a lot of time in the field and deployment before they
> >> are ready for prime time.
> >>
> >> The Wikimedia appliances use Wolf Mountain, and I've tested it for
> >> about 4 months with few problems, but I only use it for hosting the
> >> Cherokee Language Wikipedia. Its performance is several orders of
> >> magnitude better than either EXT3 or ReiserFS. Despite this, for
> >> vertical wiki servers, it's OK to go out with; folks can specify
> >> whether they want appliances with EXT3, Reiser, or WMFS, but it's a
> >> long way from being "cooked" completely, though it does scale to
> >> 1 exabyte FS images.
> >
> >
> > I've seen you mention the Wolf Mountain FS in other emails, but Google
> > isn't telling me a lot about it. Do you have a whitepaper? Are there
> > any published benchmark results? What sort of workloads do you
> > benchmark?
> >
> > NATE
> >
> Wikipedia is the app for now. I have not done any benchmarks on the FS
> side, just the capture side, and it's been transferred to another
> entity. I have no idea what they will rename it to, but I expect you
> may hear about it soon. One of the incarnations of it is Solera's DSFS,
> which can be reviewed here:
>
> www.soleranetworks.com
So this is a single stream, write-only? ...
> I can sustain 850 MB/S throughput from user space with it -- about 5x
> any other FS. On some hardware, I've broken the 1.25 GB/S
> (gigabyte/second) barrier with it.
And you're saying it scales to much higher multi-spindle,
single-machine throughput. Cool.
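For context on numbers like these: sequential-write throughput is
commonly measured with O_DIRECT so the page cache does not inflate the
figure. A rough sketch follows; the path, file size, and block size are
illustrative assumptions, not DSFS specifics:

/* Rough sketch of a sequential-write throughput test, the kind of
 * measurement behind MB/s figures like those quoted above. */
#define _GNU_SOURCE        /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK   (1 << 20)  /* 1 MiB per write */
#define NBLOCKS 1024       /* 1 GiB total */

int main(void)
{
    const char *path = "/mnt/test/throughput.dat";  /* hypothetical */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK)) {  /* O_DIRECT alignment */
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0x5A, BLOCK);

    /* O_DIRECT bypasses the page cache so we time the device, not RAM. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NBLOCKS; i++) {
        if (write(fd, buf, BLOCK) != BLOCK) {
            perror("write");
            return 1;
        }
    }
    if (fsync(fd) < 0)       /* make sure the data hit the media */
        perror("fsync");
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    free(buf);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib = (double)NBLOCKS * BLOCK / (1 << 20);
    printf("%.1f MiB in %.2f s = %.1f MiB/s\n", mib, secs, mib / secs);
    return 0;
}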
I'd love to see a whitepaper, or failing that, to have an off-list
discussion of your approach and the various kernel limitations you ran
up against in testing. I don't suppose they invited you to the Kernel
Summit to talk about it, heh.
NATE