Message-ID: <2211309.MyIe47cYEz@silver>
Date:   Sat, 09 Jul 2022 20:08:41 +0200
From:   Christian Schoenebeck <linux_oss@...debyte.com>
To:     Dominique Martinet <asmadeus@...ewreck.org>,
        Kent Overstreet <kent.overstreet@...il.com>
Cc:     linux-kernel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
        Eric Van Hensbergen <ericvh@...il.com>,
        Latchesar Ionkov <lucho@...kov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs

On Saturday, 9 July 2022 16:42:53 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Sat, Jul 09, 2022 at 04:21:46PM +0200:
> > > The best thing to do would probably be to just tell the client it
> > > can't use the mempools for flushes -- the flushes are rare and will
> > > use small buffers with your smaller allocations patch; I bet I
> > > wouldn't be able to reproduce that anymore, but it should probably
> > > forbid the mempool anyway, just in case.
> > 
> > So the problem is that one task ends up with more than one request at a
> > time, and the buffer is allocated and associated per request, not per
> > task. If I am not missing something, then this scenario (>1 request
> > simultaneously per task) currently may actually only happen with
> > p9_client_flush() calls. Which simplifies the problem.
> 
> Yes that should be the only case where this happens.
> 
> > So probably the best way would be to simply flip the call order such that
> > p9_tag_remove() is called before p9_client_flush(), similar to how it's
> > already done with p9_client_clunk() calls?
> 
> I don't think we can do that safely without some extra work -- because
> until we get the reply to the flush, the legitimate reply to the
> original request can still come. It's perfectly possible that by the
> time we send the flush the server has already sent the normal reply to
> our original request -- in fact, with the flush stuck there, it's
> almost certain it has...

Mmm, I "think" that wouldn't be anything new. There is no guarantee that the 
client won't get a late response delivered by the server for a request that 
the client has already thrown away.
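
To make that concrete, here is a minimal user-space sketch of the situation
(not kernel code; the tag table and the alloc_req()/remove_req()/
handle_reply() helpers are made up for illustration). If the client tears
the tag down before the flush has settled, the server's legitimate reply to
that tag can still arrive and then finds nothing to complete -- or, worse,
finds a reused tag:

/*
 * Toy model of the client-side tag table; purely illustrative,
 * not the real structures from net/9p/client.c.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_TAGS 8

struct req {
	int tag;
	int done;
};

static struct req *tag_table[MAX_TAGS];

static struct req *alloc_req(int tag)
{
	struct req *r = calloc(1, sizeof(*r));

	r->tag = tag;
	tag_table[tag] = r;
	return r;
}

static void remove_req(int tag)
{
	free(tag_table[tag]);
	tag_table[tag] = NULL;		/* tag may now be reused */
}

/* A reply arriving from the server for a given tag. */
static void handle_reply(int tag)
{
	struct req *r = tag_table[tag];

	if (!r) {
		/* Original request already torn down: nothing to complete. */
		printf("late reply for tag %d dropped\n", tag);
		return;
	}
	r->done = 1;
	printf("reply for tag %d completed\n", tag);
}

int main(void)
{
	struct req *orig = alloc_req(1);

	/* Unsafe ordering: tear the tag down before the flush settles. */
	remove_req(orig->tag);

	/*
	 * The server may still send the normal reply to tag 1 because it
	 * processed the request before it ever saw the Tflush.  With the
	 * tag already removed, that reply has nothing to complete -- or
	 * the tag could meanwhile have been handed to a new request.
	 */
	handle_reply(1);
	return 0;
}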

What happens on the server side is: requests come in sequentially and start 
being processed in exactly that order. But they then actually run in parallel 
on worker threads, being dispatched back and forth between threads several 
times. And Tflush itself is really just another request. So there is no 
guarantee that the response order corresponds to the order of the requests 
originally sent by the client, and if the client sent a Tflush, it might 
still get a response to the original, already-abandoned "normal" request.
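
As a toy illustration of that ordering (again just hypothetical user-space
code, not the actual server implementation): two requests accepted in order
but handed to separate worker threads can finish, and therefore be replied
to, in either order.

/*
 * Requests are accepted in order but processed on worker threads, so
 * the replies can go out in any order.  Purely illustrative.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct request {
	const char *name;	/* e.g. "Twrite (tag 1)" or "Tflush (tag 2)" */
	unsigned int work_us;	/* simulated processing time */
};

static void *worker(void *arg)
{
	struct request *req = arg;

	usleep(req->work_us);	/* request runs on its own thread */
	printf("reply sent for %s\n", req->name);
	return NULL;
}

int main(void)
{
	/* Received in this order, replied to in whatever order they finish. */
	struct request reqs[] = {
		{ "Twrite (tag 1)", 50000 },
		{ "Tflush (tag 2)", 10000 },
	};
	pthread_t threads[2];

	for (int i = 0; i < 2; i++)
		pthread_create(&threads[i], NULL, worker, &reqs[i]);
	for (int i = 0; i < 2; i++)
		pthread_join(threads[i], NULL);

	/*
	 * Typical output here: the Tflush reply goes out before the Twrite
	 * reply, even though Twrite was received first -- the client cannot
	 * rely on either order.
	 */
	return 0;
}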

Best regards,
Christian Schoenebeck

