Message-ID: <ZJM4ZK8cKI4AmOgy@manet.1015granger.net>
Date: Wed, 21 Jun 2023 13:50:28 -0400
From: Chuck Lever <cel@...nel.org>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
intel-gfx@...ts.freedesktop.org, linux-afs@...ts.infradead.org,
linux-nfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 09/13] net: Convert sunrpc from pagevec to folio_batch
On Wed, Jun 21, 2023 at 05:45:53PM +0100, Matthew Wilcox (Oracle) wrote:
> Remove the last usage of pagevecs. There is a slight change here; we
> now free the folio_batch as soon as it fills up instead of freeing the
> folio_batch when we try to add a page to a full batch. This should have
> no effect in practice.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
I don't yet have visibility into the folio_batch_* helpers, but this
looks like a wholly mechanical replacement of pagevec. LGTM.
I assume this is going to be merged via another tree, not nfsd-next,
so:
Acked-by: Chuck Lever <chuck.lever@...cle.com>
> ---
> include/linux/sunrpc/svc.h | 2 +-
> net/sunrpc/svc.c | 10 +++++-----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index c2807e301790..f8751118c122 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -222,7 +222,7 @@ struct svc_rqst {
> struct page * *rq_next_page; /* next reply page to use */
> struct page * *rq_page_end; /* one past the last page */
>
> - struct pagevec rq_pvec;
> + struct folio_batch rq_fbatch;
> struct kvec rq_vec[RPCSVC_MAXPAGES]; /* generally useful.. */
> struct bio_vec rq_bvec[RPCSVC_MAXPAGES];
>
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index e7c101290425..587811a002c9 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -640,7 +640,7 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node)
> if (!rqstp)
> return rqstp;
>
> - pagevec_init(&rqstp->rq_pvec);
> + folio_batch_init(&rqstp->rq_fbatch);
>
> __set_bit(RQ_BUSY, &rqstp->rq_flags);
> rqstp->rq_server = serv;
> @@ -851,9 +851,9 @@ bool svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
> }
>
> if (*rqstp->rq_next_page) {
> - if (!pagevec_space(&rqstp->rq_pvec))
> - __pagevec_release(&rqstp->rq_pvec);
> - pagevec_add(&rqstp->rq_pvec, *rqstp->rq_next_page);
> + if (!folio_batch_add(&rqstp->rq_fbatch,
> + page_folio(*rqstp->rq_next_page)))
> + __folio_batch_release(&rqstp->rq_fbatch);
> }
>
> get_page(page);
> @@ -887,7 +887,7 @@ void svc_rqst_release_pages(struct svc_rqst *rqstp)
> void
> svc_rqst_free(struct svc_rqst *rqstp)
> {
> - pagevec_release(&rqstp->rq_pvec);
> + folio_batch_release(&rqstp->rq_fbatch);
> svc_release_buffer(rqstp);
> if (rqstp->rq_scratch_page)
> put_page(rqstp->rq_scratch_page);
> --
> 2.39.2
>