Message-ID: <20220704035258.yu7k6sras2eiywsp@moria.home.lan>
Date: Sun, 3 Jul 2022 23:52:58 -0400
From: Kent Overstreet <kent.overstreet@...il.com>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: Christian Schoenebeck <linux_oss@...debyte.com>,
linux-kernel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>
Subject: Re: [PATCH 3/3] 9p: Add mempools for RPCs
On Mon, Jul 04, 2022 at 12:38:46PM +0900, Dominique Martinet wrote:
> > @@ -270,10 +276,8 @@ p9_tag_alloc(struct p9_client *c, int8_t type, unsigned int max_size)
> > if (!req)
> > return ERR_PTR(-ENOMEM);
> >
> > - if (p9_fcall_init(c, &req->tc, alloc_msize))
> > - goto free_req;
> > - if (p9_fcall_init(c, &req->rc, alloc_msize))
> > - goto free;
> > + p9_fcall_init(c, &req->tc, 0, alloc_msize);
> > + p9_fcall_init(c, &req->rc, 1, alloc_msize);
>
>
> mempool allocation never fails, correct?
>
> (don't think this needs a comment, just making sure here)
As long as __GFP_DIRECT_RECLAIM (the flag formerly spelled __GFP_WAIT) is
included in the gfp mask, yes.
> This all looks good to me, will queue it up in my -next branch after
> running some tests next weekend and hopefully submit when 5.20 opens
> with the code making smaller allocs more common.
Sounds good!