Message-ID: <64d833020705161326m51922f63n31ef5b528a514ed7@mail.gmail.com>
Date:	Wed, 16 May 2007 16:26:29 -0400
From:	koan <koan00@...il.com>
To:	linux-kernel@...r.kernel.org
Cc:	neilb@...e.de
Subject: [2.6.20.11] File system corruption with degraded md/RAID-5 device

Hello,

I am seeing file system corruption on a newly created md device when
using RAID-5 and the array is created with a missing disk (degraded
mode). The creation procedure itself works fine and there are no
errors in syslog.

After the creation, /proc/mdstat shows:

Personalities : [raid1] [raid6] [raid5] [raid4] [faulty]
md1 : active raid5 sdb1[1] sda1[0]
      610469632 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]

This appears normal ([3/2] [UU_] indicates two of three devices
active, with the third missing). However, when I format it with ext3
and use it, the filesystem reports errors immediately after creation.
To help rule out hardware issues, I used both disks separately and
together in a RAID-1 device. Both of those setups work.

Hardware is:

Athlon XP 2.0 GHz
1GB PC2100
Nforce2 Ultra motherboard (Shuttle AN-35N Ultra)
Silicon Image 3114-based add-in SATA I card (4 ports)
2x Seagate 7200.10 320GB (ST3320620AS)

Software is:

vanilla Linux 2.6.20.11
mdadm - v2.6.1 - 22nd February 2007
e2fsck 1.39 (29-May-2006)
mke2fs 1.39 (29-May-2006)


Here is what I am doing to test:

fdisk /dev/sda1 and /dev/sdb1 to type fd (Linux raid autodetect)
mdadm --create /dev/md1 -c 128 -l 5 -n 3 /dev/sda1 /dev/sdb1 missing
mke2fs -j -b 4096 -R stride=32 /dev/md1
e2fsck -f /dev/md1
---------------------
Result: FAILS - fsck errors (Example: "Inode 3930855 is in use, but
has dtime set.")


fdisk /dev/sda1 to type 83 (Linux)
mke2fs -j -b 4096 -R stride=32 /dev/sda1
e2fsck -f /dev/sda1
---------------------
Result: OK


fdisk /dev/sdb1 to type 83 (Linux)
mke2fs -j -b 4096 /dev/sdb1
e2fsck -f /dev/sdb1
---------------------
Result: OK


fdisk /dev/sda1 and /dev/sdb1 to type fd (Linux raid autodetect)
mdadm --create /dev/md1 -c 128 -l 1 -n 2 /dev/sda1 /dev/sdb1
mke2fs -j -b 4096 -R stride=32 /dev/md1
e2fsck -f /dev/md1
---------------------
Result: OK


I assume this is an issue with RAID-5 degraded-mode operation? I am
unsure how to solve the issue or debug any further. Can anyone advise?
(Warning: I'm not a kernel hacker!) I am not on the list, so please CC
me on any replies.
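
If more detail on the array state would help, I can also send the
output of the standard mdadm status commands, e.g.:

mdadm --detail /dev/md1
mdadm --examine /dev/sda1 /dev/sdb1
cat /proc/mdstat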

Thanks, Jesse

View attachment "dmesg.txt" of type "text/plain" (15685 bytes)

View attachment "lspci.txt" of type "text/plain" (13238 bytes)

Download attachment "config" of type "application/octet-stream" (33085 bytes)
