Message-ID: <64d833020705170608rbab5e56t605fa64c8e6ecb3c@mail.gmail.com>
Date: Thu, 17 May 2007 09:08:08 -0400
From: koan <koan00@...il.com>
To: "Neil Brown" <neilb@...e.de>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [2.6.20.11] File system corruption with degraded md/RAID-5 device
Neil,
I ran the drives in RAID-0 as you suggested and noticed the same problem.
I also ran memtest86 overnight, but there were no errors.
I did notice this SIL3114 card doesn't play nice with the other
Promise cards (or vice versa). I had to swap them around in the PCI
slots for the system to boot at all. I'll try removing the additional
hardware and see if that resolves the issue.
Thanks, Jesse
On 5/16/07, Neil Brown <neilb@...e.de> wrote:
> On Wednesday May 16, koan00@...il.com wrote:
> >
> > Here is what I am doing to test:
> >
> > fdisk /dev/sda and /dev/sdb, setting sda1 and sdb1 to type fd (Linux raid auto)
> > mdadm --create /dev/md1 -c 128 -l 5 -n 3 /dev/sda1 /dev/sdb1 missing
> > mke2fs -j -b 4096 -R stride=32 /dev/md1
> > e2fsck -f /dev/md1
> > ---------------------
> > Result: FAILS - fsck errors (Example: "Inode 3930855 is in use, but
> > has dtime set.")
>
> Very odd. I cannot reproduce this, but then my drives are somewhat
> smaller than yours (though I'm not sure how that could be
> significant).
>
> Can you try a raid0 across 2 drives? That would be more like the
> raid5 layout than raid1.
>
> My guess is some subtle hardware problem, as I would be very
> surprised if the raid5 code is causing this. Maybe run memtest86?
>
> NeilBrown
>
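For anyone following the thread, the test recipe quoted above can be collected
into a single dry-run script (a sketch only; the device names, chunk size, and
stride come from the thread, and the commands are printed rather than executed
since running them for real would destroy data on those disks):

```shell
#!/bin/sh
# Dry-run sketch of the degraded raid5 test quoted in this thread.
# RUN=echo prints each command instead of executing it; set RUN= (empty)
# to actually run them -- this WILL destroy data on /dev/sda1 and /dev/sdb1.
RUN=echo

# Create a 3-disk raid5 with one member "missing" (degraded from the start),
# 128k chunk size, matching the mdadm --create line in the thread.
$RUN mdadm --create /dev/md1 -c 128 -l 5 -n 3 /dev/sda1 /dev/sdb1 missing

# ext3 with 4k blocks and stride=32 (128k chunk / 4k block);
# -E is the current spelling of the older -R option used in the thread.
$RUN mke2fs -j -b 4096 -E stride=32 /dev/md1

# Force a full fsck of the freshly created filesystem; it should be clean.
$RUN e2fsck -f /dev/md1
```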