Message-ID: <YsmT7WHDh9NXZ/nV@codewreck.org>
Date: Sat, 9 Jul 2022 23:42:53 +0900
From: Dominique Martinet <asmadeus@...ewreck.org>
To: Christian Schoenebeck <linux_oss@...debyte.com>
Cc: Kent Overstreet <kent.overstreet@...il.com>,
linux-kernel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs
Christian Schoenebeck wrote on Sat, Jul 09, 2022 at 04:21:46PM +0200:
> > The best thing to do would probably be to just tell the client it can't
> > use the mempools for flushes -- the flushes are rare and will use small
> > buffers with your smaller allocations patch; I bet I wouldn't be able to
> > reproduce that anymore, but it should probably still forbid the mempool
> > just in case.
>
> So the problem is that one task ends up with more than 1 request at a time,
> and the buffer is allocated and associated per request, not per task. If I am
> not missing something, then this scenario (>1 request simultaneously per task)
> currently may actually only happen with p9_client_flush() calls, which
> simplifies the problem.
Yes, that should be the only case where this happens.
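To make the hang concrete, here's a minimal sketch of the situation (not
the real client code at all; the struct and function names are made up, and
it only hangs under memory pressure, when the pool's kmalloc() fallback
fails):

#include <linux/mempool.h>
#include <linux/slab.h>

/* stand-in for a request and its buffers, not the real p9_req_t */
struct p9_fake_req {
	char buf[64];
};

static void sketch_flush_hang(void)
{
	struct p9_fake_req *orig, *flush;
	/* a single reserved element, like a nearly exhausted request pool */
	mempool_t *pool = mempool_create_kmalloc_pool(1,
					sizeof(struct p9_fake_req));

	if (!pool)
		return;

	/* the original request takes the last reserved element */
	orig = mempool_alloc(pool, GFP_NOFS);

	/*
	 * Tflush for that same request: when kmalloc() fails,
	 * mempool_alloc() sleeps until an element is freed, but the
	 * only element this task could free is 'orig', which has to
	 * stay alive until the flush completes, so the task ends up
	 * waiting on itself.
	 */
	flush = mempool_alloc(pool, GFP_NOFS);

	mempool_free(flush, pool);
	mempool_free(orig, pool);
	mempool_destroy(pool);
}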
> So probably the best way would be to simply flip the call order such that
> p9_tag_remove() is called before p9_client_flush(), similar to how it's
> already done with p9_client_clunk() calls?
I don't think we can do that safely without some extra work, because until
we get the reply to the flush, the legitimate reply to the original request
can still come. It's perfectly possible that by the time we send the flush
the server will already have sent the normal reply to our original
request -- in fact, with the flush stuck there, it's almost certain it
has...
For the fd transport, for example, the reads happen in a worker thread; if
that buffer disappears too early it'll fail with EIO and the whole mount
will break down, as I think that'll just kill the read worker...
(Actually, how does that even work? It checks for rreq->status !=
REQ_STATUS_SENT, but it should be FLSHD at this point... erm)
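For reference, this is roughly the shape of that completion check in the
read worker. It's a hand-written paraphrase, not the actual
net/9p/trans_fd.c code, and it assumes the usual net/9p headers plus the
local struct p9_conn from that file:

static int sketch_reply_done(struct p9_conn *m)
{
	/*
	 * Once a reply has been fully read into m->rreq's buffer, only
	 * a request still in REQ_STATUS_SENT is completed normally.
	 */
	if (m->rreq->status == REQ_STATUS_SENT) {
		list_del(&m->rreq->req_list);
		p9_client_cb(m->client, m->rreq, REQ_STATUS_RCVD);
		return 0;
	}

	/*
	 * Anything else (which would include a request marked FLSHD
	 * after a flush) falls through to the error path: the worker
	 * bails out with -EIO and takes the whole connection with it.
	 */
	return -EIO;
}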
In theory we can probably adjust the cancel() callback to make sure we
never use the recv/send buffers from then on, but it might be tricky: for
tcp it's 'm->rc.sdata' that gets read into, and it can point to a recv
buffer if we're mid-read (e.g. we got the header for that reply but it
wasn't done in a single read() call and we're waiting for more data); and
that operates without the client lock, so we can't just take it away
easily...
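For illustration, "adjusting cancel()" would mean something along these
lines. This is purely a sketch: the helper is made up, and the locking
caveat above is exactly what it does not solve, since the read worker uses
m->rc without the client lock:

static void sketch_detach_recv_buf(struct p9_conn *m, struct p9_req_t *req)
{
	/*
	 * If the read worker is currently filling req's receive buffer,
	 * point m->rc back at the connection's small scratch buffer so
	 * the request's buffer can be freed; the remainder of that
	 * reply would then have to be discarded.  Doing this safely is
	 * the hard part, because the worker touches m->rc without the
	 * client lock.
	 */
	if (m->rreq == req) {
		m->rc.sdata = m->tmp_buf;
		m->rc.capacity = sizeof(m->tmp_buf);
		m->rreq = NULL;
	}
}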
Well, that'll need careful consideration... I think it'll be much simpler
to say flush calls are allocated differently and don't use the mempool,
even if that's another bit of legacy whose rationale will be hard to
understand down the road...
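Concretely, the idea would look something like this. Again just a sketch:
the pool name and helper are hypothetical, and it assumes the small Tflush
buffers from your series, so the kmalloc() here is small and allowed to
fail:

#include <linux/mempool.h>
#include <linux/slab.h>
#include <net/9p/9p.h>
#include <net/9p/client.h>

/* hypothetical pool, standing in for whatever the mempool patch adds */
static mempool_t *p9_msg_pool;

static int sketch_alloc_msg_buf(struct p9_fcall *fc, size_t size, int8_t type)
{
	if (type == P9_TFLUSH) {
		/*
		 * Flushes are rare and small: a plain kmalloc() that is
		 * allowed to fail, so a task that already holds a pool
		 * element for its original request never has to wait on
		 * the pool a second time.
		 */
		fc->sdata = kmalloc(size, GFP_NOFS);
		if (!fc->sdata)
			return -ENOMEM;
	} else {
		/*
		 * Normal requests keep the guaranteed path; the pool
		 * elements are assumed to be msize-sized here.
		 */
		fc->sdata = mempool_alloc(p9_msg_pool, GFP_NOFS);
	}
	fc->capacity = size;
	return 0;
}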
> > Anyway, I'm not comfortable with this patch right now: a hang is worse
> > than an allocation failure warning.
>
> As you already mentioned, with the pending 'net/9p: allocate appropriate
> reduced message buffers' patch those hangs should not happen, as Tflush would
> then just kmalloc() a small buffer. But I would probably still fix this issue
> here regardless, as it might hurt in other ways in the future. Shouldn't be
> too much noise to swap the call order, right?
I definitely want to fix this even with your patches, yes.
--
Dominique