Message-ID: <YsLluKb1v5SqN2xD@codewreck.org>
Date:   Mon, 4 Jul 2022 22:06:00 +0900
From:   Dominique Martinet <asmadeus@...ewreck.org>
To:     Christian Schoenebeck <linux_oss@...debyte.com>
Cc:     Kent Overstreet <kent.overstreet@...il.com>,
        linux-kernel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
        Eric Van Hensbergen <ericvh@...il.com>,
        Latchesar Ionkov <lucho@...kov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs

Christian Schoenebeck wrote on Mon, Jul 04, 2022 at 01:12:51PM +0200:
> On Monday, 4 July 2022 05:38:46 CEST Dominique Martinet wrote:
> > +Christian, sorry I just noticed you weren't in Ccs again --
> > the patches are currently there if you want a look:
> > https://evilpiepirate.org/git/bcachefs.git/log/?h=9p_mempool
> 
> I wonder whether it would make sense to update the 9p section in MAINTAINERS 
> to better reflect current reality, at least so that contributors would 
> CC me right away?

I almost suggested that, but Al Viro didn't cc Eric/Latchesar on the
previous series so I'm not sure how much people rely on MAINTAINERS and
how much of it is muscle memory...
Well, can't hurt to try at least -- giving Eric and Latchesar a chance
to reply.



Replying out of order

> However, that's exactly what I was going to address with my already posted 
> patches (the relevant patches for this issue being 9..12):
> https://lore.kernel.org/all/cover.1640870037.git.linux_oss@crudebyte.com/
> And in the cover letter (section "STILL TODO" ... "3.") I suggested 
> subsequently subdividing kmem_cache_alloc() into e.g. 4 allocation size 
> categories, because that's what my already posted patches do anyway.

Yes, I hinted at that by asking if it'd be worth a second mempool for 8k
buffers, but I'm not sure it is -- I think kmalloc will be just as fast
for these in practice? That would need checking.

But I also took a fresh look just now and hadn't remembered we had so many
different cases there, or that msize is no longer really used -- this is
just a gut feeling, but I think we'd benefit from rounding up to a few
pooled sizes, e.g. I assume it'll be faster to allocate 1MB from the cache
three times than to get 500k/600k/1MB from kmalloc.

That's a lot of assuming though and this is going to need checking...
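
Something like this is what I have in mind, as a very rough sketch (the
size classes below are numbers I made up on the spot, nothing measured):

/* Rough sketch only: the size classes are made-up examples. */
#include <linux/kernel.h>       /* ARRAY_SIZE */

static const size_t p9_size_classes[] = { 4096, 65536, 262144, 1048576 };

static size_t p9_round_up_size(size_t want)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(p9_size_classes); i++)
                if (want <= p9_size_classes[i])
                        return p9_size_classes[i];
        return want;    /* bigger than any class: allocate as requested */
}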


> Hoo, Dominique, please hold your horses. I can't keep up with reviewing 
> and testing all the pending 9p patches right now.
> 
> Personally I would hold these patches back for now. They would make sense in 
> the current situation on master, because ATM basically every 9p request 
> simply allocates exactly 'msize'.

So I think they're orthogonal, really:
what a mempool does is reserve the minimum amount of memory required for x
allocations (whatever min is set at init time, so here 4 parallel RPCs).
Allocations go through the normal slab path first; if that fails we get a
buffer from the reserve instead, and if none is left it waits for a
previous request to be freed, possibly throttling the number of parallel
requests down but never failing like we currently do.
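
To make that concrete, the kind of setup I mean looks roughly like this
(the names are made up for illustration, not what the actual patch uses):

/* Illustration only -- p9_msize_pool and the helpers are made-up names. */
#include <linux/mempool.h>
#include <linux/slab.h>

#define P9_POOL_MIN_NR  4       /* reserve enough buffers for 4 parallel RPCs */

static mempool_t *p9_msize_pool;

static int p9_pool_init(size_t msize)
{
        /* pre-allocates P9_POOL_MIN_NR buffers of msize bytes */
        p9_msize_pool = mempool_create_kmalloc_pool(P9_POOL_MIN_NR, msize);
        return p9_msize_pool ? 0 : -ENOMEM;
}

static void *p9_buf_alloc(gfp_t gfp)
{
        /*
         * mempool_alloc() tries a regular allocation first; only if that
         * fails does it hand out a reserved buffer, and if the reserve is
         * empty it sleeps until mempool_free() returns one -- so requests
         * get throttled instead of failing.
         */
        return mempool_alloc(p9_msize_pool, gfp);
}

static void p9_buf_free(void *buf)
{
        mempool_free(buf, p9_msize_pool);
}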

What will happen with your patches is that we'll allocate fewer large
buffers, so we can reduce the number of reserved buffers, but there will
still be some, so Kent is still going to be just as likely to see
high-order allocation failures. The memory pressure and the difficulty of
allocating high-order pages don't necessarily come from other 9p requests,
so just having fewer msize-sized buffers might help a bit, but I don't
think it'll be enough (I remember similar failures with lustre and 256k
allocs, and that's just buffer cache and what not... because we're doing
these allocs with NOFS they can't be reclaimed).
With this the worst that can happen is that some RPCs get delayed, and the
patch already falls back to allocating an msize buffer from the pool if a
smaller kmalloc failed, so I think it should work out OK as a first
iteration.
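
i.e. the allocation path ends up behaving roughly like this (again just a
sketch with made-up names, reusing the pool from the sketch above, not the
actual patch code):

/* Sketch of the fallback described above; names are made up. */
static void *p9_fcall_buf_alloc(size_t alloc_size, size_t msize, gfp_t gfp)
{
        void *buf;

        if (alloc_size < msize) {
                /* smaller request: try a plain kmalloc first */
                buf = kmalloc(alloc_size, gfp);
                if (buf)
                        return buf;
        }
        /*
         * kmalloc failed, or the request needs the full msize anyway:
         * take an msize buffer from the pool. This may sleep until
         * another RPC frees one, but it won't fail.
         */
        return mempool_alloc(p9_msize_pool, gfp);
}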

(I appreciate the need for testing, but this feels much less risky than
the iovec series we've had recently... Famous last words?)


For later iterations we might want to optimize with multiple pool sizes and
pick the smallest pooled size that fits each request, but I think that'll
be tricky to get right so I'd rather not rush such an optimization.
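
For the shape of what I mean (entirely hypothetical, reusing the size
classes from the sketch further up):

/* Hypothetical later iteration: one mempool per size class, pick the
 * smallest class that fits the request; p9_pools[] would be filled in
 * at init time. */
static mempool_t *p9_pools[ARRAY_SIZE(p9_size_classes)];

static void *p9_pool_alloc_classed(size_t want, gfp_t gfp)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(p9_size_classes); i++)
                if (want <= p9_size_classes[i])
                        return mempool_alloc(p9_pools[i], gfp);
        return kmalloc(want, gfp);      /* larger than any class */
}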


> How about I address the already discussed issues and post a v5 of those 
> patches this week and then we can continue from there?

I would have been happy to rebase your patches 9..12 on top of Kent's this
weekend, but if you want to refresh them this week we can continue from
there, sure.

--
Dominique
