Message-ID: <bug-203943-13602-hLCBXnHREX@https.bugzilla.kernel.org/>
Date: Fri, 21 Jun 2019 18:29:07 +0000
From: bugzilla-daemon@...zilla.kernel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 203943] ext4 corruption after RAID6 degraded; e2fsck skips block checks and fails

https://bugzilla.kernel.org/show_bug.cgi?id=203943

--- Comment #4 from Theodore Tso (tytso@....edu) ---

That sounds *very* clearly like a RAID bug. If RAID6 is returning garbage
to the file system in degraded mode, there is nothing the file system can
do. What worries me is that if the RAID6 system was returning garbage when
*reading*, who knows how it was trashing the file system image when the
ext4 kernel code was *writing* to it?

In any case, there's very little we as ext4 developers can do here to help,
except give you some advice for how to recover your file system. What I'd
suggest you do is use the debugfs tool to sanity-check the inode. If the
inode number reported by e2fsck was 123456, you can look at it by using the
debugfs command "stat <123456>". If the timestamps, user id and group id
numbers, etc., look insane, you can speed up the recovery time by using the
command "clri <123456>", which zeros out the inode.

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
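
For illustration, here is a minimal sketch of the debugfs workflow described
in the comment above. The device name (/dev/md0) and the inode number
(123456, reused from the example in the comment) are placeholders; substitute
the actual array device and the inode number e2fsck reported.

    # Inspect the suspect inode; read-only access is sufficient here.
    debugfs -R 'stat <123456>' /dev/md0

    # If the timestamps, uid/gid, size, etc. look like garbage, open the
    # file system read-write and zero out the inode.
    debugfs -w -R 'clri <123456>' /dev/md0

    # Then re-run e2fsck so it can clean up the directory entries and
    # bitmaps that referenced the cleared inode.
    e2fsck -f /dev/md0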