Message-ID: <bug-203943-13602-TNwOaykekc@https.bugzilla.kernel.org/>
Date: Fri, 21 Jun 2019 14:52:01 +0000
From: bugzilla-daemon@...zilla.kernel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 203943] ext4 corruption after RAID6 degraded; e2fsck skips block checks and fails

https://bugzilla.kernel.org/show_bug.cgi?id=203943

--- Comment #3 from Yann Ormanns (yann@...anns.net) ---

Andreas & Ted, thank you for your replies.

(In reply to Andreas Dilger from comment #1)
> This seems like a RAID problem and not an ext4 problem. The RAID array
> shouldn't be returning random garbage if one of the drives is unavailable.
> Maybe it is not doing data parity verification on reads, so that it is
> blindly returning bad blocks from the failed drive rather than
> reconstructing valid data from parity if the drive does not fail completely?

How can I check that? At least running "checkarray" did not find anything
new or helpful.

(In reply to Theodore Tso from comment #2)
> Did you resync the disks *before* you ran e2fsck? Or only afterwards?

1. my RAID6 got degraded and ext4 errors showed up
2. I ran e2fsck; it consumed all memory and showed only "Inode %$i block
   %$b conflicts with critical metadata, skipping block checks."
3. I replaced the faulty disk and resynced the RAID6
4. e2fsck was able to clean the filesystem
5. I simulated a drive fault (so my RAID6 had n+1 working disks left)
6. the ext4 FS got corrupted again
7. although the RAID is clean again, e2fsck is not able to clean the FS
   (like in step 2)

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
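On the "How can I check that?" question in comment #3: md does not verify
parity on normal reads of a healthy array, so Debian's "checkarray" (a thin
wrapper around md's sysfs scrub interface) only reports mismatches found
during an explicit check. A minimal way to run and inspect such a scrub by
hand, assuming the array is /dev/md0 (the device name does not appear in
the thread):

    # Request a consistency check (scrub); md walks every stripe and
    # compares data against parity without modifying anything.
    echo check > /sys/block/md0/md/sync_action

    # Watch scrub progress.
    cat /proc/mdstat

    # After the check completes, a non-zero value means mismatched
    # stripes were found.
    cat /sys/block/md0/md/mismatch_cnt

    # "repair" behaves like "check" but rewrites parity where the
    # stripe is inconsistent.
    echo repair > /sys/block/md0/md/sync_action

Note that a clean scrub of the already-resynced array says nothing about
what the degraded array returned to ext4 earlier, which would be consistent
with checkarray finding "nothing new or helpful" after the resync.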
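The degrade/replace cycle in steps 3 and 5 is normally driven with mdadm's
manage mode; a sketch, again assuming /dev/md0 and hypothetical member
devices /dev/sdc1 and /dev/sdd1:

    # Step 5: simulate a drive fault by marking a member as failed.
    mdadm /dev/md0 --fail /dev/sdc1

    # Step 3: remove the failed member and resync onto a replacement.
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm /dev/md0 --add /dev/sdd1

    # Resync progress shows up in /proc/mdstat.
    cat /proc/mdstat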
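Separately, the out-of-memory behaviour in step 2 has a commonly suggested
workaround (not something raised in this thread): e2fsck.conf(5) supports a
[scratch_files] section that lets e2fsck store some of its tables in
scratch files instead of in memory, which can keep a pass over a badly
corrupted filesystem from exhausting RAM. A sketch:

    # /etc/e2fsck.conf
    [scratch_files]
        directory = /var/cache/e2fsck

The directory must already exist and must not live on the filesystem being
checked; trading memory for disk I/O makes the run slower but lets it
complete.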