Message-ID: <Pine.LNX.4.63.0607311348150.14631@qynat.qvtvafvgr.pbz>
Date: Mon, 31 Jul 2006 13:53:09 -0700 (PDT)
From: David Lang <dlang@...italinsight.com>
To: David Masover <ninja@...phack.com>
cc: Clay Barnes <clay.barnes@...il.com>,
Rudy Zijlstra <rudy@...ons.demon.nl>,
Adrian Ulrich <reiser4@...nkenlights.ch>,
vonbrand@....utfsm.cl, ipso@...ppymail.ca, reiser@...esys.com,
lkml@...productions.com, jeff@...zik.org, tytso@....edu,
linux-kernel@...r.kernel.org, reiserfs-list@...esys.com
Subject: Re: the " 'official' point of view" expressed by kernelnewbies.org regarding
 reiser4 inclusion
On Mon, 31 Jul 2006, David Masover wrote:
> Probably. By the time a few KB of metadata are corrupted, I'm reaching for
> my backup. I don't care what filesystem it is or how easy it is to edit the
> on-disk structures.
>
> This isn't to say that having robust on-disk structures isn't a good thing.
> I have no idea how Reiser4 will hold up either way. But ultimately, what you
> want is the journaling (so power failure / crashes still leave you in an OK
> state), backups (so when blocks go bad, you don't care), and performance (so
> you can spend less money on hardware and more money on backup hardware).
Please read the discussion that took place at the filesystem summit a couple of
weeks ago (available on lwn.net).
One of the things pointed out there is that as disks get larger, the rate of bad
spots per gigabyte of storage is staying about the same, as is the rate of
outright failures per gigabyte.
As a result, the idea of only running on perfect disks that never have any
failures is becoming significantly less realistic; instead you need to take
measures to survive in the face of minor corruption (including robust
filesystems, RAID, etc.).
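To make the scaling concrete, here is a quick back-of-the-envelope sketch. The
per-gigabyte defect rate below is a number I made up purely for illustration,
not anything from the summit; the point is only that if the rate per gigabyte
stays roughly constant, the expected number of bad spots per drive grows
linearly with capacity.

/*
 * Illustrative only: defects_per_gig is an assumed constant, not a
 * measured figure.  Expected bad spots per drive = capacity * rate,
 * so bigger drives mean more bad spots even at the same rate.
 */
#include <stdio.h>

int main(void)
{
	const double defects_per_gig = 1e-3;		/* assumed */
	const double capacities_gb[] = { 40, 200, 750, 2000 };
	size_t i;

	for (i = 0; i < sizeof(capacities_gb) / sizeof(capacities_gb[0]); i++)
		printf("%6.0f GB drive -> ~%.2f expected bad spots\n",
		       capacities_gb[i],
		       capacities_gb[i] * defects_per_gig);

	return 0;
}

So a drive 50x larger has ~50x the expected bad spots, even though nothing got
worse per gigabyte.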
David Lang