Message-ID: <20150320014708.GA3425@thunk.org>
Date: Thu, 19 Mar 2015 21:47:08 -0400
From: Theodore Ts'o <tytso@....edu>
To: Andreas Dilger <adilger@...ger.ca>
Cc: Allison Henderson <achender@...ux.vnet.ibm.com>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"jane@...ibm.com" <jane@...ibm.com>,
"marcel.dufour@...ibm.com" <marcel.dufour@...ibm.com>
Subject: Re: fs corruption recovery
On Wed, Mar 18, 2015 at 06:59:52PM -0600, Andreas Dilger wrote:
> I think that running a 17TB filesystem on ext3 is a recipe for disaster. They should use ext4 for anything larger than 16TB.
It's not *possible* to have a 17TB file system with ext3. Something
must be very wrong there. 16TB is the maximum you can have before you
end up overflowing a 32-bit block number. Unless this is a PowerPC
with a 16K block size or some such?
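(Back-of-the-envelope sketch of that limit, purely for illustration: the
capacity ceiling is 2^32 block numbers times the block size.)

    # Illustration only: ext2/3 block numbers are 32-bit, so the maximum
    # filesystem size is 2**32 blocks times the block size.
    for block_size_kib in (1, 2, 4, 16):
        max_tib = (2 ** 32) * block_size_kib * 1024 / (1 << 40)
        print(f"{block_size_kib:>2}k blocks -> {max_tib:.0f} TiB maximum")
    # 4k blocks (the usual x86 case) give 16 TiB; you'd need something
    # like a 16k block size to get anywhere past that.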
If e2fsck is segfaulting, then I would certainly try getting the
latest version of e2fsprogs, just in case the problem isn't just that
it's running out of memory. Also, if recovering customer data is the
most important thing, the first thing they should do is make an image
copy of the file system, since it's possible that incorrect use of
e2fsck, or an old/buggy version of e2fsck, could make things worse.
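(A rough sketch of what "make an image copy first" could look like; the
device and target paths below are placeholders, not anything from this
thread:)

    #!/usr/bin/env python3
    # Sketch only: copy the damaged filesystem before any fsck touches it.
    import subprocess

    DEVICE = "/dev/sdX1"             # placeholder for the damaged device
    FULL_COPY = "/backup/sdX1.img"   # block-for-block copy of everything
    META_COPY = "/backup/sdX1.raw"   # metadata-only raw image from e2image -r

    # Full copy; all recovery experiments should run against this copy,
    # never against the original device.
    subprocess.run(["dd", f"if={DEVICE}", f"of={FULL_COPY}",
                    "bs=4M", "conv=noerror,sync"], check=True)

    # e2image -r writes a sparse raw image containing only the metadata,
    # which is enough to rehearse an e2fsck run without copying 17TB of data.
    subprocess.run(["e2image", "-r", DEVICE, META_COPY], check=True)

Whether dd, ddrescue, or e2image is the right tool depends on how much
spare space they have and whether the underlying disks are still readable.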
In particular, if they are seeing errors with multiply-claimed inodes,
it's likely that part of the inode table was written to the wrong
place, and sometimes a skilled human being can get more data than
simply using e2fsck -y and praying. At the end of the day the
question is how much is the customer data worth and how much effort is
the customer / IBM willing to invest in trying to get every last bit
of data back?
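(Purely as an illustration of the kind of manual poking a human might do
before reaching for e2fsck -y: dump where each block group's inode table
is supposed to live and compare that against what e2fsck is complaining
about. The device path is a placeholder, and dumpe2fs output formatting
can differ between e2fsprogs versions.)

    #!/usr/bin/env python3
    # Sketch only: list the expected inode table location for each group.
    import subprocess

    DEVICE = "/dev/sdX1"  # placeholder

    out = subprocess.run(["dumpe2fs", DEVICE], capture_output=True,
                         text=True, check=True).stdout

    group = None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Group "):
            group = line.split(":")[0]        # e.g. "Group 0"
        elif line.startswith("Inode table at") and group:
            # e.g. "Inode table at 1026-1537 (+1026)"
            print(f"{group}: {line}")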
- Ted