Message-ID: <6bffcb0e0609140229r59691de5i58d2d81f839d744e@mail.gmail.com>
Date: Thu, 14 Sep 2006 11:29:51 +0200
From: "Michal Piotrowski" <michal.k.k.piotrowski@...il.com>
To: "David Chinner" <dgc@....com>
Cc: linux-kernel@...r.kernel.org, xfs-masters@....sgi.com
Subject: Re: [xfs-masters] Re: 2.6.18-rc6-mm2
On 14/09/06, David Chinner <dgc@....com> wrote:
> On Thu, Sep 14, 2006 at 10:50:42AM +0200, Michal Piotrowski wrote:
> > On 14/09/06, Michal Piotrowski <michal.k.k.piotrowski@...il.com> wrote:
> > >David Chinner wrote:
> > >> On Wed, Sep 13, 2006 at 11:43:32AM +0200, Michal Piotrowski wrote:
> > >>> On 13/09/06, David Chinner <dgc@....com> wrote:
> > >>>> I've booted 2.6.18-rc6-mm2 and mounted and unmounted several xfs
> > >>>> filesystems. I'm currently running xfsqa on it, and I haven't seen
> > >>>> any failures on unmount yet.
> > >>>>
> > >>>> That test case would be really handy, Michal.
> > >>> http://www.stardust.webpages.pl/files/mm/2.6.18-rc6-mm2/test_mount_fs.sh
> > >>>
> > >>> ls -hs /home/fs-farm/
> > >>> total 3.6G
> > >>> 513M ext2.img 513M ext4.img 513M reiser3.img 513M xfs.img
> > >>> 513M ext3.img 513M jfs.img 513M reiser4.img
> > >>
> > >> Ok, so you're using loopback and mounting one of each filesystem, then
> > >> unmounting them in the same order. I have mounted and unmounted an
> > >> XFS filesystem in isolation in exactly the same way you have been, but
> > >> I haven't seen any failures.
> > >>
> > >> Can you rerun the test with just XFS in your script and see if you
> > >> see any failures? If you don't see any failures, can you add each
> > >> filesystem back in one at a time until you see failures again?
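For reference, the XFS-only case reduces to a tight mount/unmount loop over
the loopback image. A minimal sketch (the actual test_mount_fs.sh is only
linked above, so the iteration count here is an assumption; the paths are the
ones from the listing):

# Sketch of an XFS-only loopback mount/unmount loop; paths are taken from
# the ls listing above, the iteration count is arbitrary.
for i in $(seq 1 100); do
    mount -t xfs -o loop /home/fs-farm/xfs.img /mnt/fs-farm/xfs || break
    umount /mnt/fs-farm/xfs || break
done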
> > >
> > >
> > >I still get an oops (with XFS only). Maybe it's a filesystem image problem.
> > >
> > >xfs_info /mnt/fs-farm/xfs/
> > >meta-data=/dev/loop1             isize=256    agcount=8, agsize=16384 blks
> > >         =                       sectsz=512
> > >data     =                       bsize=4096   blocks=131072, imaxpct=25
> > >         =                       sunit=0      swidth=0 blks, unwritten=1
> > >naming   =version 2              bsize=4096
> > >log      =internal               bsize=4096   blocks=1200, version=1
> > >         =                       sectsz=512   sunit=0 blks
> > >realtime =none                   extsz=65536  blocks=0, rtextents=0
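(If it helps to reproduce this elsewhere, an image with roughly that geometry
can be recreated along these lines. This is a sketch: the exact mkfs.xfs
options aren't shown in the thread, only the resulting geometry above.)

# Recreate a ~512MB XFS loop image with a similar geometry to the xfs_info
# output above (agcount=8, 4k blocks); standard dd/mkfs.xfs usage, not the
# exact commands used originally.
dd if=/dev/zero of=/home/fs-farm/xfs.img bs=1M count=512
mkfs.xfs -f -b size=4096 -d agcount=8 /home/fs-farm/xfs.img
mount -t xfs -o loop /home/fs-farm/xfs.img /mnt/fs-farm/xfs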
> >
> > Can I send you this fs image? It's only a 246 KB bz2 file.
>
> I've downloaded it, and I don't see a panic on that fs at all.
> I've got it sitting in a tight loop mounting and unmounting the
> image you sent me, and nothing has gone wrong. I don't think it's
> a corrupted filesystem problem - it seems more like a memory corruption
> problem to me.
I check the memory with memtest86 from time to time.
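(memtest86 only exercises the RAM itself. If the suspicion is the kernel
scribbling over freed memory rather than bad hardware, slab and page
allocator debugging usually catches it closer to the source. A sketch of the
relevant options, offered as a general suggestion rather than something
requested in the thread:)

# Slab/page-allocator debugging catches use-after-free and out-of-bounds
# writes at the point they happen, unlike a hardware memory test.
grep -E 'CONFIG_DEBUG_(SLAB|PAGEALLOC)=' .config
# expected after enabling both under "Kernel hacking":
# CONFIG_DEBUG_SLAB=y
# CONFIG_DEBUG_PAGEALLOC=y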
>
> What arch are you running on and what compiler are you using?
gcc -v
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--enable-checking=release --with-system-zlib --enable-__cxa_atexit
--disable-libunwind-exceptions --enable-libgcj-multifile
--enable-languages=c,c++,objc,obj-c++,java,fortran,ada
--enable-java-awt=gtk --disable-dssi
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
--with-cpu=generic --host=i386-redhat-linux
Thread model: posix
gcc version 4.1.1 20060525 (Red Hat 4.1.1-1)
I'll build the system with gcc 3.4.
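(A sketch of what that rebuild could look like, assuming Fedora's compat
package, which installs the compiler as gcc34; the binary name is an
assumption, so adjust it to whatever the 3.4 compiler is called locally.)

# Rebuild the kernel with gcc 3.4 instead of the default 4.1.1; "gcc34" is
# assumed to be the name of the compat compiler binary.
make CC=gcc34 clean bzImage modules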
> Can you try 2.6.18-rc6 and see if it panics like this on your
> machine?
2.6.18-rc7 works fine.
> There is little difference in XFS between -rc6 and -rc6-mm2,
> so it would be good to know whether this is a problem isolated to
> the -mm tree or not....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> Principal Engineer
> SGI Australian Software Group
>
Regards,
Michal
--
Michal K. K. Piotrowski
LTG - Linux Testers Group
(http://www.stardust.webpages.pl/ltg/)