Message-ID: <20070427050742.GC20286@nifty>
Date: Thu, 26 Apr 2007 22:07:43 -0700
From: Valerie Henson <val_henson@...ux.intel.com>
To: Jan Kara <jack@...e.cz>
Cc: David Chinner <dgc@....com>, Amit Gud <gud@....edu>,
Nikita Danilov <nikita@...sterfs.com>,
David Lang <david.lang@...italinsight.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
riel@...riel.com, zab@...bo.net, arjan@...radead.org,
suparna@...ibm.com, brandon@...p.org, karunasagark@...il.com
Subject: Re: [RFC][PATCH] ChunkFS: fs fission for faster fsck
On Thu, Apr 26, 2007 at 10:47:38AM +0200, Jan Kara wrote:
> Do I get it right that you just have in each cnode a pointer to the
> previous & next cnode? But then if two consecutive cnodes get corrupted,
> you have no way to connect the chain, do you? If each cnode contained
> some unique identifier of the file and a number identifying the position
> of the cnode, then there would be at least some way (though expensive) to
> link them together correctly...
You're right; it's easy to add a little more redundancy that would
make it possible to recover from two consecutive cnodes being
corrupted. Keeping the parent inode ID in each continuation inode is
definitely a smart thing to do.
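As a rough illustration (this is not the actual ChunkFS on-disk
format; the struct and field names below are made up), a continuation
inode that records its parent alongside the chain pointers could look
something like this:

#include <stdint.h>

/*
 * Sketch of a continuation inode carrying the extra redundancy
 * discussed above.  parent_ino (plus the chunk it lives in) lets a
 * repair pass rebuild the chain by collecting every cnode that claims
 * the same parent, even if the next/prev pointers in two consecutive
 * cnodes are corrupted.
 */
struct cnode_disk {
	uint64_t parent_ino;	/* inode number of the owning file */
	uint64_t parent_chunk;	/* chunk holding the parent inode */
	uint64_t prev_cnode;	/* previous cnode in the chain */
	uint64_t next_cnode;	/* next cnode in the chain */
	/* ... block/extent pointers for this chunk's data ... */
};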
Some minor side notes: Continuation inodes aren't really in any
defined order - if you look at Jeff's ping-pong chunk allocation
example, you'll see that the data in each continuation inode won't be
in linearly increasing order. Also, while the current implementation
is a simple doubly-linked list, this may not be the best solution
long-term. What's important is that each continuation inode have a
back pointer to the parent and that there is some structure for
quickly looking up the continuation inode for a given file offset.
Suggestions for data structures that work well in this situation are
welcome. :)
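For what it's worth, here is one simple shape such a lookup structure
could take (purely illustrative userspace-style C, with invented
names): a sorted, non-overlapping extent table searched with binary
search. In the kernel, an rbtree or a radix tree keyed by file offset
would probably be the more natural fit.

#include <stddef.h>
#include <stdint.h>

/* Maps a byte range of the file to the cnode that holds it. */
struct cnode_extent {
	uint64_t start;		/* first file offset covered */
	uint64_t len;		/* number of bytes covered */
	uint64_t cnode_ino;	/* continuation inode for this range */
};

/* Per-file index: extents sorted by start offset, non-overlapping. */
struct cnode_map {
	struct cnode_extent *ext;
	size_t nr;
};

/* Return the cnode covering @offset, or 0 if the range is a hole. */
static uint64_t cnode_lookup(const struct cnode_map *map, uint64_t offset)
{
	size_t lo = 0, hi = map->nr;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		const struct cnode_extent *e = &map->ext[mid];

		if (offset < e->start)
			hi = mid;
		else if (offset >= e->start + e->len)
			lo = mid + 1;
		else
			return e->cnode_ino;
	}
	return 0;
}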
-VAL