Message-ID: <20180801152248.GB21463@nautica>
Date: Wed, 1 Aug 2018 17:22:48 +0200
From: Dominique Martinet <asmadeus@...ewreck.org>
To: Greg Kurz <groug@...d.org>
Cc: v9fs-developer@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org,
Matthew Wilcox <willy@...radead.org>,
linux-kernel@...r.kernel.org
Subject: Re: [V9fs-developer] [PATCH 2/2] net/9p: add a per-client fcall
 kmem_cache

Greg Kurz wrote on Wed, Aug 01, 2018:
> > diff --git a/net/9p/client.c b/net/9p/client.c
> > index ba99a94a12c9..215e3b1ed7b4 100644
> > --- a/net/9p/client.c
> > +++ b/net/9p/client.c
> > @@ -231,15 +231,34 @@ static int parse_opts(char *opts, struct p9_client *clnt)
> > return ret;
> > }
> >
> > -static int p9_fcall_alloc(struct p9_fcall *fc, int alloc_msize)
> > +static int p9_fcall_alloc(struct p9_client *c, struct p9_fcall *fc,
> > + int alloc_msize)
> > {
> > - fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
> > + if (c->fcall_cache && alloc_msize == c->msize)
>
> This is a presumably hot path for any request but the initial TVERSION,
> you probably want likely() here...
c->fcall_cache is indeed very likely, but alloc_msize == c->msize is not
so much, as zc requests will be quite common for virtio and those are
only 4k in size.
Although with that cache I'm starting to wonder if we should always use
it... Speed-wise, if there is no memory pressure the cache is likely
going to be faster.
If there is pressure and the items get reclaimed, though, that will
bring a heavier slow-down as the allocator will need to find bigger
contiguous memory regions.
I'm not sure which path we should favor, tbh. I'll keep the two paths
separate for now.
For the first part of the check, Matthew suggested trying to trick msize
into a different value so that the check fails for the initial TVERSION
call, but even after thinking about it for a bit I don't really see how
to do that cleanly. I can at least make -that- part likely()...
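Something like this is what I have in mind (completely untested, just to
illustrate where the annotation would go; only the fcall_cache test gets
wrapped, since the msize comparison is not a given):

static int p9_fcall_alloc(struct p9_client *c, struct p9_fcall *fc,
			  int alloc_msize)
{
	/* fcall_cache is only set up once msize has been negotiated */
	if (likely(c->fcall_cache) && alloc_msize == c->msize)
		fc->sdata = kmem_cache_alloc(c->fcall_cache, GFP_NOFS);
	else
		fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
	if (!fc->sdata)
		return -ENOMEM;
	fc->capacity = alloc_msize;
	return 0;
}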
>
> > + fc->sdata = kmem_cache_alloc(c->fcall_cache, GFP_NOFS);
> > + else
> > + fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
> > if (!fc->sdata)
> > return -ENOMEM;
> > fc->capacity = alloc_msize;
> > return 0;
> > }
> >
> > +void p9_fcall_free(struct p9_client *c, struct p9_fcall *fc)
> > +{
> > + /* sdata can be NULL for interrupted requests in trans_rdma,
> > + * and kmem_cache_free does not do NULL-check for us
> > + */
> > + if (unlikely(!fc->sdata))
> > + return;
> > +
> > + if (c->fcall_cache && fc->capacity == c->msize)
>
> ... and here as well.
For this one I'll unfortunately need to store in the fc how it has been
allocated, as slob doesn't allow kmem_cache_free() on a buffer that was
allocated with kmalloc, and in anticipation of reqs being refcounted in
a hostile world the initial TVERSION req could be freed after
fcall_cache has been created :/
That's a bit of a burden, but at least it will reduce the check here to
a single one (rough sketch below, after the quoted hunk).
> > + kmem_cache_free(c->fcall_cache, fc->sdata);
> > + else
> > + kfree(fc->sdata);
> > +}
> > +EXPORT_SYMBOL(p9_fcall_free);
> > +
> > static struct kmem_cache *p9_req_cache;
> >
> > /**
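To make the above a bit more concrete, here is roughly what I'm
considering for v2 (again untested, and the 'cache' field name is just a
placeholder): the fcall remembers which cache, if any, its buffer came
from, and the free path only looks at that.

struct p9_fcall {
	...
	/* where sdata came from: the client's fcall_cache, or NULL for kmalloc */
	struct kmem_cache *cache;
};

void p9_fcall_free(struct p9_client *c, struct p9_fcall *fc)
{
	/* sdata can be NULL for interrupted requests in trans_rdma */
	if (unlikely(!fc->sdata))
		return;

	if (fc->cache)
		kmem_cache_free(fc->cache, fc->sdata);
	else
		kfree(fc->sdata);
}

with p9_fcall_alloc() setting fc->cache to either c->fcall_cache or NULL
accordingly; the client argument to p9_fcall_free() would then no longer
be needed for the decision itself.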
Anyway, I've had as many comments as I could hope for; thanks everyone
for the quick review.
I'll send a v2 of both patches tomorrow.
--
Dominique