Message-ID: <17995.47339.651121.146258@notabene.brown>
Date:	Thu, 17 May 2007 12:07:39 +1000
From:	Neil Brown <neilb@...e.de>
To:	koan <koan00@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [2.6.20.11] File system corruption with degraded md/RAID-5 device

On Wednesday May 16, koan00@...il.com wrote:
> 
> Here is what I am doing to test:
> 
> fdisk /dev/sda1 and /dev/sdb1 to type fd/Linux raid auto
> mdadm --create /dev/md1 -c 128 -l 5 -n 3 /dev/sda1 /dev/sdb1 missing
> mke2fs -j -b 4096 -R stride=32 /dev/md1
> e2fsck -f /dev/md1
> ---------------------
> Result: FAILS - fsck errors (Example: "Inode 3930855 is in use, but
> has dtime set.")

Very odd.  I cannot reproduce this, but then my drives are somewhat
smaller than yours (though I'm not sure how that could be
significant).

Can you try a raid0 across 2 drives?  That would be more like the
raid5 layout than raid1.
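
For example, something like this (same chunk size and mke2fs options
as your raid5 test; device names copied from your report, so adjust
as needed):

  mdadm --create /dev/md1 -c 128 -l 0 -n 2 /dev/sda1 /dev/sdb1
  mke2fs -j -b 4096 -R stride=32 /dev/md1
  e2fsck -f /dev/md1

If that also produces fsck errors, the problem is unlikely to be in
the raid5 code itself.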

My guess is some subtle hardware problem, as I would be very
surprised if the raid5 code is causing this.  Maybe run memtest86?
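
If booting memtest86 is awkward, a rough alternative from a running
system is the userspace "memtester" tool, e.g. (the amount of memory
to lock and the pass count here are just examples):

  memtester 512M 3

though memtest86 run from a boot disk is the more thorough test.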

NeilBrown
