Date:   Sat, 09 Jul 2022 16:21:46 +0200
From:   Christian Schoenebeck <linux_oss@...debyte.com>
To:     Dominique Martinet <asmadeus@...ewreck.org>,
        Kent Overstreet <kent.overstreet@...il.com>
Cc:     linux-kernel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
        Eric Van Hensbergen <ericvh@...il.com>,
        Latchesar Ionkov <lucho@...kov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs

On Saturday, 9 July 2022 09:43:47 CEST Dominique Martinet wrote:
> I've taken the mempool patches to 9p-next
> 
> Christian Schoenebeck wrote on Mon, Jul 04, 2022 at 03:56:55PM +0200:
> >> (I appreciate the need for testing, but this feels much less risky than
> >> the iovec series we've had recently... Famous last words?)
> > 
> > Got it, consider my famous last words dropped. ;-)
> 
> Ok, so I think you won this one...
> 
> Well -- when testing normally it obviously works well, and performance-wise
> it's roughly the same (obviously, since it tries to allocate from slab
> first, and in the normal case that will work).
> 
> When I tried gaming it with very low memory, though, it seemed to work
> well at first, but then I got a bunch of processes stuck in mempool_alloc
> with no obvious tid waiting for a reply.
> I had the bright idea of using fio with io_uring, and interestingly the
> uring workers don't show up in ps or /proc/<pid>, but with qemu's gdb
> and lx-ps I could find a bunch of iou-wrk-<pid> threads, all with
> similar stacks:
>    [<0>] mempool_alloc+0x136/0x180
>    [<0>] p9_fcall_init+0x63/0x80 [9pnet]
>    [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
>    [<0>] p9_client_rpc+0x64/0x610 [9pnet]
>    [<0>] p9_client_write+0xcb/0x210 [9pnet]
>    [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
>    [<0>] io_write+0x129/0x2c0
>    [<0>] io_issue_sqe+0xa1/0x25b0
>    [<0>] io_wq_submit_work+0x90/0x190
>    [<0>] io_worker_handle_work+0x211/0x550
>    [<0>] io_wqe_worker+0x2c5/0x340
>    [<0>] ret_from_fork+0x1f/0x30
> 
> or -- and that's the interesting part:
>    [<0>] mempool_alloc+0x136/0x180
>    [<0>] p9_fcall_init+0x63/0x80 [9pnet]
>    [<0>] p9_client_prepare_req+0xa9/0x290 [9pnet]
>    [<0>] p9_client_rpc+0x64/0x610 [9pnet]
>    [<0>] p9_client_flush+0x81/0xc0 [9pnet]
>    [<0>] p9_client_rpc+0x591/0x610 [9pnet]
>    [<0>] p9_client_write+0xcb/0x210 [9pnet]
>    [<0>] v9fs_file_write_iter+0x4d/0xc0 [9p]
>    [<0>] io_write+0x129/0x2c0
>    [<0>] io_issue_sqe+0xa1/0x25b0
>    [<0>] io_wq_submit_work+0x90/0x190
>    [<0>] io_worker_handle_work+0x211/0x550
>    [<0>] io_wqe_worker+0x2c5/0x340
>    [<0>] ret_from_fork+0x1f/0x30
> 
> The problem is these flushes: the same task is holding a buffer for the
> original RPC and tries to get a new one, but it waits for someone to free
> one and... obviously there isn't anyone (I counted 11 flushes pending, so
> more than the minimum number of buffers we'd expect from the mempool, and
> I don't think we missed any free).
> 
> Now I'm not sure what's best here.
> The best thing to do would probably be to just tell the client it can't
> use the mempools for flushes -- flushes are rare and will use small
> buffers with your smaller allocations patch; I bet I wouldn't be able to
> reproduce this anymore, but we should probably still forbid the mempool
> for them just in case.

So the problem is that one task ends up with more than one request at a time,
while the buffer is allocated and associated per request, not per task. If I
am not missing something, this scenario (more than one request per task at the
same time) may currently only happen via p9_client_flush() calls, which
simplifies the problem.
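
To put it another way, stripped down to its essence the deadlock pattern looks
somewhat like this (illustration only, not the actual 9pnet code):

#include <linux/gfp.h>
#include <linux/mempool.h>

/*
 * Each task takes one element from the pool for its original request and
 * then blocks for a second one when it needs to send a Tflush.
 */
static void *rpc_buffer_get(mempool_t *pool)
{
	/* sleeps until an element is returned once the pool is exhausted */
	return mempool_alloc(pool, GFP_NOFS);
}

static void interrupted_rpc(mempool_t *pool)
{
	/* element #1, kept until the Rflush response arrives */
	void *orig_buf = rpc_buffer_get(pool);

	/*
	 * element #2: may sleep forever, because every other task is in
	 * the same state and therefore never frees anything
	 */
	void *flush_buf = rpc_buffer_get(pool);

	mempool_free(flush_buf, pool);
	mempool_free(orig_buf, pool);
}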

So probably the best way would be to simply flip the call order such that 
p9_tag_remove() is called before p9_client_flush(), similar to how it's 
already done with p9_client_clunk() calls?
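
Roughly what I have in mind, as pseudo-code rather than the actual client.c
error path (p9_client_flush_tag() is just a made-up variant here that would
take the old tag instead of the old request):

/* Hypothetical helper, only to show the intended ordering: */
static int p9_rpc_interrupted(struct p9_client *clnt, struct p9_req_t *req)
{
	u16 oldtag = req->tc.tag;

	/* hand the original request's buffers back to the mempool first ... */
	p9_tag_remove(clnt, req);

	/*
	 * ... so that allocating the Tflush request below can no longer
	 * deadlock on the pool
	 */
	return p9_client_flush_tag(clnt, oldtag);
}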

> Anyway, I'm not comfortable with this patch right now, a hang is worse
> than an allocation failure warning.

As you already mentioned, with the pending 'net/9p: allocate appropriate
reduced message buffers' patch those hangs should not happen, as Tflush would
then just kmalloc() a small buffer. But I would probably still fix this issue
here, as it might hurt in other ways in the future. It shouldn't be too much
noise to swap the call order, right?
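
Just to sketch from memory the idea behind that patch (simplified, not the
exact code; the function name and parameters below are only for illustration):

/*
 * Small requests such as Tflush get a plain kmalloc() and never touch the
 * mempool, so under memory pressure they fail with -ENOMEM instead of
 * sleeping on the pool.
 */
static int p9_fcall_init_sized(struct p9_fcall *fc, mempool_t *pool,
			       size_t alloc_msize, size_t msize)
{
	if (alloc_msize < msize)	/* reduced buffer, e.g. for Tflush */
		fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
	else				/* full-size buffer */
		fc->sdata = mempool_alloc(pool, GFP_NOFS);

	if (!fc->sdata)
		return -ENOMEM;
	fc->capacity = alloc_msize;
	return 0;
}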

> > > > How about I address the already discussed issues and post a v5 of
> > > > those patches this week and then we can continue from there?
> > > 
> > > I would have been happy to rebase your patches 9..12 on top of Kent's
> > > this weekend but if you want to refresh them this week we can continue
> > > from there, sure.
> > 
> > I'll rebase them on master and address what we discussed so far. Then
> > we'll see.
> 
> FWIW, and regarding the other thread about virtio queue sizes, I was only
> considering the later patches with small RPCs for this merge window.

I would also recommend leaving out the virtio patches, yes.

> Shall we try to focus on that first, and then revisit the virtio and
> mempool patches once that's done?

Your call. I think both ways are viable.

Best regards,
Christian Schoenebeck

