Message-ID: <20170108054630.GL1555@ZenIV.linux.org.uk>
Date: Sun, 8 Jan 2017 05:46:39 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Greg Kurz <groug@...d.org>
Cc: Tuomas Tynkkynen <tuomas@...era.com>,
        linux-fsdevel@...r.kernel.org,
        v9fs-developer@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [V9fs-developer] 9pfs hangs since 4.7
On Sat, Jan 07, 2017 at 06:15:23PM +0000, Al Viro wrote:
> On Sat, Jan 07, 2017 at 05:19:10PM +0000, Al Viro wrote:
>
> > released) simply trigger virtio_queue_notify_vq() again? It *is* a bug
> > (if we get a burst filling a previously empty queue all at once, there won't
> > be any slots becoming freed
>
> Umm... Nope, even in that scenario we'll have all requests except the last
> one processed. I'm trying to put together a simpler reproducer, but...
> no luck so far.
FWIW, here's another fun bug in qemu 9pfs: a client may issue multiple
Tflush requests for the same pending request. The server must not reply
to them out of order, and it must reply to at least the last one. If the
client has sent more than one Tflush, it must treat the arrival of an
Rflush for any one of them as if it had also received an Rflush for each
of the earlier Tflushes (IOW, *their* tags are released). The client may
not reuse the tag of the victim request until it has received an Rflush
for the last Tflush it has sent.
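To spell that tag lifetime out, here's a minimal client-side sketch (the
structures and the release_tag() helper are made up for illustration; this
is not the Linux 9p client's actual bookkeeping): an Rflush for one Tflush
also retires every earlier Tflush aimed at the same victim, and the victim's
own tag becomes reusable only once no Tflush remains outstanding.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* purely illustrative structures */
    struct flush_req {
        uint16_t tag;               /* tag of this Tflush */
        struct flush_req *next;     /* next (newer) Tflush for the same victim */
    };

    struct victim_req {
        uint16_t tag;               /* tag of the request being flushed */
        struct flush_req *flushes;  /* oldest-first list of pending Tflushes */
    };

    /* hypothetical helper: return a tag to the client's tag pool */
    static void release_tag(uint16_t tag)
    {
        (void)tag;
    }

    /*
     * Called when an Rflush arrives for one of the victim's Tflushes.
     * Returns true once the victim's own tag may be reused.
     */
    static bool rflush_received(struct victim_req *victim, uint16_t rflush_tag)
    {
        struct flush_req *f = victim->flushes;

        /* an Rflush for one Tflush also retires every earlier one */
        while (f) {
            struct flush_req *done = f;
            bool acked = (done->tag == rflush_tag);

            f = f->next;
            release_tag(done->tag);
            free(done);
            if (acked)
                break;
        }
        victim->flushes = f;

        /* the victim's tag is free only after the last Tflush is answered */
        if (!victim->flushes) {
            release_tag(victim->tag);
            return true;
        }
        return false;
    }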
The Linux kernel-side 9p client doesn't issue such multiple Tflush requests,
but the protocol allows them.
The qemu server does not guarantee in-order replies to multiple Tflush
requests; worse, while it is likely to reply to only one of them, that may
very well be the _first_ one. Subsequent ones will be lost; moreover, it'll
leak both coroutines and ring slots.
AFAICS, a clean way to deal with that would be to handle Tflush synchronously,
right in pdu_submit():
    if pdu->type == Tflush
        look the victim request up.
        if !victim || victim == pdu          // [*]
            send reply and free pdu immediately.
        if victim->type == Tflush            // [**]
            pdu->victim = victim->victim
        else
            pdu->victim = victim
        mark the victim cancelled
        add pdu to the tail of pdu->victim->flushes
and let pdu_complete(pdu) send a reply to each request in pdu->flushes
as well (freeing them as we go)
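In qemu terms that could look roughly like the sketch below. The victim,
flushes and flush_next members, the pdu_get_oldtag()/pdu_find_by_tag()
helpers and the simplified signatures are all assumptions for illustration;
only pdu_submit(), pdu_complete(), pdu_free() and the QTAILQ macros refer
to things that actually exist in the tree.

    /*
     * Sketch only.  Assumes V9fsPDU grows three new members
     * (initialised in pdu_alloc()):
     *     V9fsPDU *victim;                   - request this Tflush is cancelling
     *     QTAILQ_HEAD(, V9fsPDU) flushes;    - Tflushes waiting on this request
     *     QTAILQ_ENTRY(V9fsPDU) flush_next;  - linkage for the above
     * pdu_get_oldtag() and pdu_find_by_tag() are made-up helpers.
     */
    static void pdu_submit(V9fsPDU *pdu)
    {
        if (pdu->id == P9_TFLUSH) {
            V9fsPDU *victim = pdu_find_by_tag(pdu->s, pdu_get_oldtag(pdu));

            if (!victim || victim == pdu) {        /* [*] */
                pdu_complete(pdu, 0);              /* reply and free right away */
                return;
            }
            if (victim->id == P9_TFLUSH) {         /* [**] */
                pdu->victim = victim->victim;      /* chain to the original victim */
            } else {
                pdu->victim = victim;
            }
            pdu->victim->cancelled = true;
            QTAILQ_INSERT_TAIL(&pdu->victim->flushes, pdu, flush_next);
            return;                                /* reply comes from pdu_complete() */
        }
        /* ... dispatch non-Tflush requests exactly as before ... */
    }

    static void pdu_complete(V9fsPDU *pdu, ssize_t len)
    {
        V9fsPDU *flush, *next;

        /* ... marshal and push the reply for pdu itself, as the code does now ... */

        /* then answer every Tflush that was waiting on this request, in order */
        QTAILQ_FOREACH_SAFE(flush, &pdu->flushes, flush_next, next) {
            QTAILQ_REMOVE(&pdu->flushes, flush, flush_next);
            /* ... marshal an Rflush for 'flush' and push it ... */
            pdu_free(flush);
        }
        pdu_free(pdu);
    }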
Some locking might be needed to avoid the races between pdu_submit() and
pdu_complete(), but it's not going to be a wide critical area. I'm not
familiar with qemu locking primitives, sorry; what's providing an exclusion
between QLIST_INSERT_HEAD in pdu_alloc() (from handle_9p_output())
and QLIST_REMOVE in pdu_free() (from pdu_complete())? Looks like the same
thing ought to suffice here...
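If it turns out nothing there provides exclusion and a lock is needed, the
critical sections would be narrow; purely as a sketch (the flush_lock member
is an assumption, added to V9fsState and qemu_mutex_init()ed wherever the pdu
lists are set up):

    /* in pdu_submit(): the victim lookup and chaining from the sketch above */
    qemu_mutex_lock(&s->flush_lock);
    /* ... look the victim up, set pdu->victim, mark it cancelled, INSERT_TAIL ... */
    qemu_mutex_unlock(&s->flush_lock);

    /* in pdu_complete(): draining pdu->flushes before freeing pdu */
    qemu_mutex_lock(&s->flush_lock);
    /* ... QTAILQ_FOREACH_SAFE over pdu->flushes, reply to and free each ... */
    qemu_mutex_unlock(&s->flush_lock);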
[*] a cute way to hurt the current qemu server, BTW - coroutine that waits for
itself to complete...
[**] Tflush to Tflush is another fun corner case - it does *not* do anything
to the earlier Tflush, but the reply to it must follow the reply to the original
Tflush to avoid tag reuse problems. Note that this is not the "multiple Tflush"
case mentioned above - those would all have oldtag pointing to the same
request; they are not chained and, unlike this one, can happen with legitimate
clients. Tflush to Tflush, OTOH, is not something a sane client would do.