Message-ID: <20150820143626.GI17933@dhcp-13-216.nay.redhat.com>
Date: Thu, 20 Aug 2015 22:36:26 +0800
From: Eryu Guan <eguan@...hat.com>
To: Dave Chinner <david@...morbit.com>
Cc: Tejun Heo <tj@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
xfs@....sgi.com, axboe@...com, Jan Kara <jack@...e.com>,
linux-fsdevel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH block/for-linus] writeback: fix syncing of I_DIRTY_TIME inodes

On Thu, Aug 20, 2015 at 02:12:24PM +0800, Eryu Guan wrote:
> On Wed, Aug 19, 2015 at 07:56:11AM +1000, Dave Chinner wrote:
> > On Tue, Aug 18, 2015 at 12:54:39PM -0700, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Tue, Aug 18, 2015 at 10:47:18AM -0700, Tejun Heo wrote:
> > > > Hmm... the only possibility I can think of is tot_write_bandwidth
> > > > being zero when it shouldn't be. I've been staring at the code for a
> > > > while now but nothing rings a bell. Time for another debug patch, I
> > > > guess.
> > >
> > > So, I can now reproduce the bug (it takes a lot of trials but lowering
> > > the number of tested files helps quite a bit) and instrumented all the
> > > early exit paths w/o the fix patch. bdi_has_dirty_io() and
> > > wb_has_dirty_io() are never out of sync with the actual dirty / io
> > > lists even when the test 048 fails, so the bug at least is not caused
> > > by writeback skipping due to buggy bdi/wb_has_dirty_io() result.
> > > Whenever it skips, all the lists are actually empty (verified while
> > > holding list_lock).
> > >
> > > One suspicion I have is that this could be a subtle timing issue which
> > > is being exposed by the new short-cut path. Anything which adds delay
> > > seems to make the issue go away. Dave, does anything ring a bell?
> >
> > No, it doesn't. The data writeback mechanisms XFS uses are all
> > generic. It marks inodes I_DIRTY_PAGES and lets the generic code
> > take care of everything else. Yes, we do delayed allocation during
> > writeback, and we log the inode size updates during IO completion,
> > so if inode sizes are not getting updated, then Occam's Razor
> > suggests that writeback is not happening.
> >
> > I'd suggest looking at some of the XFS tracepoints during the test:
> >
> > tracepoint			trigger
> > xfs_file_buffered_write	once per write syscall
> > xfs_file_fsync		once per fsync per inode
> > xfs_writepage		every ->writepage call
> > xfs_setfilesize		every IO completion that updates inode size
>
> I gave the tracepoints a try, but my root fs is xfs so I got a lot of
> noise. I'll try to install a new vm with ext4 as the root fs. But I'm
> not sure whether the new vm will reproduce the failure; we'll see.
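
(In principle, the root fs noise could probably also have been cut by
filtering the xfs events on the scratch device's dev_t instead of
reinstalling. A rough sketch, with a made-up dev value (8:16, which
encodes to 0x800010 in the kernel's internal dev_t format):

# hypothetical: restrict each event to SCRATCH_DEV's dev_t
trace-cmd record -e xfs_file_buffered_write -f 'dev == 0x800010' ...

A fresh ext4 root seemed simpler, though.)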

I installed a new vm with ext4 as the root fs and got some trace info.

On the new vm, only generic/048 is reproducible; generic/049 always
passes. And I can only reproduce generic/048 when the xfs tracepoints
are enabled; if the writeback tracepoints are enabled too, I can no
longer reproduce the failure.

All tests are done on a 4.2-rc7 kernel.

This is the trace-cmd script I'm using:

cd /mnt/ext4

# record the XFS events Dave suggested; trace.dat is written to $PWD
trace-cmd record -e xfs_file_buffered_write \
		 -e xfs_file_fsync \
		 -e xfs_writepage \
		 -e xfs_setfilesize &

pushd /path/to/xfstests
./check generic/048
popd

# SIGINT tells trace-cmd record to stop; wait for it to finish
# writing trace.dat before generating the report
kill -s 2 $!
wait $!
trace-cmd report >trace_report.txt
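
(For completeness: the same events can also be toggled directly through
tracefs, without trace-cmd; a minimal sketch, assuming debugfs is
mounted at /sys/kernel/debug:

echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_file_buffered_write/enable
echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_file_fsync/enable
echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_writepage/enable
echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_setfilesize/enable
cat /sys/kernel/debug/tracing/trace_pipe > trace_report.txt &
)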

I attached three files:

1) xfs-trace-generic-048.txt.bz2 [1]: the trace-cmd report
2) xfs-trace-generic-048.diff: the generic/048 failure diff output,
   which shows which files have incorrect sizes
3) xfs-trace-generic-048.metadump.bz2: a metadump of SCRATCH_DEV, which
   contains the test files
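
(In case it helps: the metadump can be turned back into a mountable
image with xfs_mdrestore from xfsprogs; a quick sketch, with the image
and mount point names assumed:

bunzip2 xfs-trace-generic-048.metadump.bz2
xfs_mdrestore xfs-trace-generic-048.metadump scratch.img
mount -o loop scratch.img /mnt/scratch
)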

If more info is needed, please let me know.

Thanks,
Eryu

[1] I'll attach this file in a follow-up mail, to avoid the xfs list's
    500k size limit