Message-Id: <200910280644.n9S6i6P7007417@demeter.kernel.org>
Date: Wed, 28 Oct 2009 06:44:06 GMT
From: bugzilla-daemon@...zilla.kernel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 14354] Bad corruption with 2.6.32-rc1 and upwards
http://bugzilla.kernel.org/show_bug.cgi?id=14354
--- Comment #138 from Alexey Fisher <bug-track@...her-privat.net> 2009-10-28 06:44:01 ---
(In reply to comment #137)
> This is not a valid test. Mounting with "-o noload" will discard all of the
> transaction information in the journal, and virtually guarantee the filesystem
> is inconsistent. It will be no better than ext2, which requires a full e2fsck
> run after each crash. This is NOT a valid reproducer for this problem.
Grr... I know my English is not good, but is it really that hard to understand
what I mean? "noload" is not used to reproduce the crash! To produce the crash
I use the distribution's default mount options.
For this bug I do not trust the distribution to run fsck, so I do it manually. I
stop in the initrd, so root is not mounted! Then I mount it manually to be sure
it is read-only. Previously I mounted it at this stage with the option "-o ro".
The result of that was that we never saw the journal corruption, because the
kernel silently "repaired" it (a plain "-o ro" mount still replays the journal).
Now I use "-o ro,noload" to mount root and then run fsck (noload is not there to
reproduce the crash). And now I can see whether the journal is corrupted after a
normal crash. If the journal is corrupt, the whole filesystem is corrupt too.
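For reference, the sequence I run from the initrd shell is roughly this (the
device name and mount point are just examples from my setup):

    mount -o ro,noload /dev/sda1 /mnt   # read-only, do NOT replay the journal
    umount /mnt
    e2fsck -fn /dev/sda1                # force a full check, change nothing

With "-n" e2fsck answers "no" to everything and skips journal recovery, so the
un-replayed state of the filesystem and the journal is preserved.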
Now the question is: do we use the journal to recover the fs? If we replay a
broken journal, what will that recovery look like? Do we get these "multiply
claimed blocks" because we replay wrong information from the journal, and is
this the reason why files that were written long before the crash are sometimes
corrupted too?
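If it helps to answer this, the journal contents can be dumped without
replaying them, e.g. with debugfs from e2fsprogs (again, the device name is
just an example):

    debugfs -R 'logdump -a' /dev/sda1   # print the journal records, read-only

That should show which blocks a replay would have written over.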
--
Configure bugmail: http://bugzilla.kernel.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html