Message-ID: <20161202221122.296f130e@duuni>
Date: Fri, 2 Dec 2016 22:11:22 +0200
From: Tuomas Tynkkynen <tuomas@...era.com>
To: Eric Van Hensbergen <ericvh@...il.com>
CC: V9FS Developers <v9fs-developer@...ts.sourceforge.net>,
Linux FS Devel <linux-fsdevel@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: 9pfs hangs since 4.7
On Tue, 29 Nov 2016 10:39:39 -0600
Eric Van Hensbergen <ericvh@...il.com> wrote:
> Any idea of what xfstests is doing at this point in time? I'd be a
> bit worried about some sort of loop in the namespace since it seems to
> be in path traversal. Could also be some sort of resource leak or
> fragmentation, I'll admit that many of the regression tests we do are
> fairly short in duration. Another approach would be to look at doing
> this with a different server (over a network link instead of virtio)
> to isolate it as a client versus server side problem (although from
> the looks of things this does seem to be a client issue).
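For reference, trying a network server instead of virtio should just be
a matter of a mount along these lines (this is only a sketch; the server
address, port and export name are placeholders, and diod is just one
example of a standalone 9P2000.L server):

# Mount the scratch fs from a network 9p server over TCP instead of virtio.
mount -t 9p -o trans=tcp,port=564,version=9p2000.L,aname=/scratch \
    192.168.1.10 $SCRATCH_MNT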
The part of xfstests where it hangs is one of these two loops:
FILES=1000
for i in `seq 0 1 $FILES`; do
        (
                sleep 5
                xfs_io -f -c "truncate 10485760" $SCRATCH_MNT/testfile.$i
                dd if=/dev/zero of=$SCRATCH_MNT/testfile.$i bs=4k conv=notrunc
        ) > /dev/null 2>&1 &
done
wait

for i in `seq 0 1 $FILES`; do
        dd of=/dev/null if=$SCRATCH_MNT/testfile.$i bs=512k iflag=direct > /dev/null 2>&1 &
done
wait
So all that's happening on the 9p mount is a bunch of reads and opens
of the binaries (sleep, xfs_io, dd) and their .so dependencies
(which apparently includes some readlinks as well).
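One way to double-check that from the guest side would be something like
the following (just a sketch; the command and path are placeholders):

# Trace only the path-related syscalls one of the test commands makes,
# to see which opens/readlinks end up hitting the 9p mount:
strace -f -e trace=execve,open,openat,readlink,read \
    xfs_io -f -c "truncate 10485760" $SCRATCH_MNT/testfile.0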
I also tried building QEMU with tracing support enabled, and according
to its own 9p event log the server did end up replying to every client
request (i.e. each v9fs_foo with a given tag was matched up with a
v9fs_foo_return or a v9fs_rerror)... so yes, this is looking more like
a client problem.
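For reference, the tracing setup was roughly along these lines (the
configure flags and the exact log format can differ between QEMU
versions, and the file names are placeholders, so treat this as a
sketch):

# Build QEMU with the "log" trace backend and enable all 9p trace points;
# with the log backend the events go to stderr:
./configure --enable-trace-backends=log --target-list=x86_64-softmmu && make
qemu-system-x86_64 ... -trace 'enable=v9fs_*' 2> 9p-trace.log

# Crude check for unanswered requests; assumes each trace line contains
# the event name plus "tag <n>", as in hw/9pfs/trace-events:
awk '
    /v9fs_[a-z_]*_return|v9fs_rerror/ {
        for (i = 1; i <= NF; i++) if ($i == "tag") delete pending[$(i+1)]
        next
    }
    /v9fs_/ {
        for (i = 1; i <= NF; i++) if ($i == "tag") pending[$(i+1)] = $0
    }
    END { for (t in pending) print "no reply seen for:", pending[t] }
' 9p-trace.log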