Message-ID: <20110802121601.GA13061@localhost>
Date: Tue, 2 Aug 2011 20:16:01 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Dave Chinner <david@...morbit.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: xfstests 073 regression
On Tue, Aug 02, 2011 at 08:04:45PM +0800, Dave Chinner wrote:
> On Tue, Aug 02, 2011 at 07:44:28PM +0800, Wu Fengguang wrote:
> > On Tue, Aug 02, 2011 at 12:52:42AM +0800, Christoph Hellwig wrote:
> > > wb_check_background_flush is indeed what we're hitting.
> >
> > That means s_umount is NOT held by another queued writeback work.
>
> Right. We already kind of knew that was occurring because there's
> a remount,ro going on.
Yes, and it would be even better if that could be confirmed with a full
sysrq-t trace.
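
For reference, below is roughly the path being hit. This is a sketch
from memory of the 3.0-era fs/super.c and fs/fs-writeback.c, so treat
it as illustrative rather than the exact code in the tree under test.
grab_super_passive() only trylocks s_umount, so anyone holding it for
write -- e.g. the remount,ro path -- makes it fail, and the flusher
just requeues the inode and retries:

	static bool grab_super_passive(struct super_block *sb)
	{
		spin_lock(&sb_lock);
		if (list_empty(&sb->s_instances)) {
			spin_unlock(&sb_lock);
			return false;
		}
		sb->s_count++;
		spin_unlock(&sb_lock);

		/*
		 * Only a trylock: if remount,ro or umount holds s_umount
		 * for write, this fails and the inode cannot be written
		 * back right now.
		 */
		if (down_read_trylock(&sb->s_umount)) {
			if (sb->s_root)
				return true;
			up_read(&sb->s_umount);
		}

		put_super(sb);
		return false;
	}

	/* caller, in __writeback_inodes_wb(): */
	while (!list_empty(&wb->b_io)) {
		struct inode *inode = wb_inode(wb->b_io.prev);
		struct super_block *sb = inode->i_sb;

		if (!grab_super_passive(sb)) {
			/* requeue and retry -> busy loop while s_umount is held */
			requeue_io(inode, wb);
			continue;
		}
		...
	}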
> >
> > > See the trace output using a patch inspired by Curt's below:
> > >
> > > # tracer: nop
> > > #
> > > # TASK-PID CPU# TIMESTAMP FUNCTION
> > > # | | | | |
> > > <...>-4279 [000] 113.034052: writeback_grab_super_failed: bdi 7:0: sb_dev 0:0 nr_pages=9223372036854775807 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=wb_check_background_flush
> > > <...>-4279 [000] 113.034052: writeback_grab_super_failed: bdi 7:0: sb_dev 0:0 nr_pages=9223372036854775807 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=wb_check_background_flush
> > > <...>-4279 [000] 113.034052: writeback_grab_super_failed: bdi 7:0: sb_dev 0:0 nr_pages=9223372036854775807 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=wb_check_background_flush
> >
> > What's that bdi 7:0? And sb_dev=0:0, nr_pages=9223372036854775807 = 0x7fffffffffffffff (LONG_MAX).
> >
> > All of these point to some special bdi/inode.
>
> #define LOOP_MAJOR 7
>
> It's a loop device. xfstests uses them quite a lot.
Yeah, it is.
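
For context, the trace fields match how the background work item gets
queued: nr_pages is simply LONG_MAX (9223372036854775807) because
background writeback is bounded by the dirty threshold rather than a
page count, and bdi 7:0 decodes via LOOP_MAJOR. Again a sketch from
the 3.0-era fs/fs-writeback.c (the reason= annotation comes from the
debug patch):

	/* include/linux/major.h */
	#define LOOP_MAJOR	7

	/* fs/fs-writeback.c */
	static long wb_check_background_flush(struct bdi_writeback *wb)
	{
		if (over_bground_thresh()) {
			struct wb_writeback_work work = {
				.nr_pages	= LONG_MAX,	/* 0x7fffffffffffffff */
				.sync_mode	= WB_SYNC_NONE,	/* sync_mode=0 in the trace */
				.for_background	= 1,		/* background=1 */
				.range_cyclic	= 1,		/* range_cyclic=1 */
			};

			return wb_writeback(wb, &work);
		}

		return 0;
	}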
> Maybe it would be a good idea to run xfstests on an xfs filesystem
> in your regular writeback testing cycle to get decent coverage of
> this case?
I've run xfstests case 073 on two of my boxes, but still cannot
reproduce the problem. This is the script I used; is there anything
wrong with it?
#!/bin/sh

# Devices and mount points for the xfstests TEST and SCRATCH filesystems
export TEST_DEV=/dev/sda5
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/sda6
export SCRATCH_MNT=/mnt/scratch

# xfstests expects TEST_DEV to already be mounted when ./check runs
mount $TEST_DEV $TEST_DIR

./check 073
Interestingly, that test case always fails on one box and succeeds on
the other.
Thanks,
Fengguang