Message-ID: <A03755D2-3EEB-4A21-9302-6F03316F2709@oracle.com>
Date: Thu, 30 Mar 2023 16:36:34 +0000
From: Chuck Lever III <chuck.lever@...cle.com>
To: David Howells <dhowells@...hat.com>
CC: Matthew Wilcox <willy@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Jens Axboe <axboe@...nel.dk>, Jeff Layton <jlayton@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"open list:NETWORKING [GENERAL]" <netdev@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Trond Myklebust <trond.myklebust@...merspace.com>,
Anna Schumaker <anna@...nel.org>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>
Subject: Re: [RFC PATCH v2 40/48] sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage

> On Mar 30, 2023, at 10:26 AM, David Howells <dhowells@...hat.com> wrote:
>
> Chuck Lever III <chuck.lever@...cle.com> wrote:
>
>> Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg, and
>> leave the marker alone for now, please.
>
> If you insist. See attached.
Very good, thank you for accommodating my regression paranoia.

Acked-by: Chuck Lever <chuck.lever@...cle.com>
>
> David
> ---
> sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage
>
> When transmitting data, call down into TCP using sendmsg with
> MSG_SPLICE_PAGES to indicate that content should be spliced rather than
> performing sendpage calls to transmit header, data pages and trailer.
>
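(For readers who haven't met MSG_SPLICE_PAGES yet: the pattern being adopted
looks roughly like the sketch below. This is illustrative only and is not taken
from the patch; the helper name and the MSG_MORE handling are mine.)

/*
 * Minimal sketch (not from the patch): splice one page's worth of payload
 * into a socket via sendmsg() instead of kernel_sendpage().  The page is
 * referenced rather than copied, so its contents must not change until
 * the data has been transmitted.
 */
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <linux/bvec.h>

static int splice_page_to_sock(struct socket *sock, struct page *page,
			       unsigned int offset, unsigned int len,
			       bool more)
{
	struct bio_vec bvec;
	struct msghdr msg = {
		.msg_flags = MSG_SPLICE_PAGES | (more ? MSG_MORE : 0),
	};

	/* Describe the payload as a single bio_vec and hand it to TCP. */
	bvec_set_page(&bvec, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
	return sock_sendmsg(sock, &msg);
}
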
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Trond Myklebust <trond.myklebust@...merspace.com>
> cc: Anna Schumaker <anna@...nel.org>
> cc: Chuck Lever <chuck.lever@...cle.com>
> cc: Jeff Layton <jlayton@...nel.org>
> cc: "David S. Miller" <davem@...emloft.net>
> cc: Eric Dumazet <edumazet@...gle.com>
> cc: Jakub Kicinski <kuba@...nel.org>
> cc: Paolo Abeni <pabeni@...hat.com>
> cc: Jens Axboe <axboe@...nel.dk>
> cc: Matthew Wilcox <willy@...radead.org>
> cc: linux-nfs@...r.kernel.org
> cc: netdev@...r.kernel.org
> ---
> include/linux/sunrpc/svc.h | 11 +++++------
> net/sunrpc/svcsock.c | 40 +++++++++++++---------------------------
> 2 files changed, 18 insertions(+), 33 deletions(-)
>
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 877891536c2f..456ae554aa11 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv)
> extern u32 svc_max_payload(const struct svc_rqst *rqstp);
>
> /*
> - * RPC Requsts and replies are stored in one or more pages.
> + * RPC Requests and replies are stored in one or more pages.
> * We maintain an array of pages for each server thread.
> * Requests are copied into these pages as they arrive. Remaining
> * pages are available to write the reply into.
> *
> - * Pages are sent using ->sendpage so each server thread needs to
> - * allocate more to replace those used in sending. To help keep track
> - * of these pages we have a receive list where all pages initialy live,
> - * and a send list where pages are moved to when there are to be part
> - * of a reply.
> + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
> + * needs to allocate more to replace those used in sending. To help keep track
> + * of these pages we have a receive list where all pages initially live, and a
> + * send list where pages are moved to when they are to be part of a reply.
> *
> * We use xdr_buf for holding responses as it fits well with NFS
> * read responses (that have a header, and some data pages, and possibly
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index 03a4f5615086..af146e053dfc 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -1059,17 +1059,18 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
> svc_xprt_received(rqstp->rq_xprt);
> return 0; /* record not complete */
> }
> -
> +
> static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
> int flags)
> {
> - return kernel_sendpage(sock, virt_to_page(vec->iov_base),
> - offset_in_page(vec->iov_base),
> - vec->iov_len, flags);
> + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags, };
> +
> + iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, vec, 1, vec->iov_len);
> + return sock_sendmsg(sock, &msg);
> }
>
> /*
> - * kernel_sendpage() is used exclusively to reduce the number of
> + * MSG_SPLICE_PAGES is used exclusively to reduce the number of
> * copy operations in this path. Therefore the caller must ensure
> * that the pages backing @xdr are unchanging.
> *
> @@ -1109,28 +1110,13 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
> if (ret != head->iov_len)
> goto out;
>
> - if (xdr->page_len) {
> - unsigned int offset, len, remaining;
> - struct bio_vec *bvec;
> -
> - bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
> - offset = offset_in_page(xdr->page_base);
> - remaining = xdr->page_len;
> - while (remaining > 0) {
> - len = min(remaining, bvec->bv_len - offset);
> - ret = kernel_sendpage(sock, bvec->bv_page,
> - bvec->bv_offset + offset,
> - len, 0);
> - if (ret < 0)
> - return ret;
> - *sentp += ret;
> - if (ret != len)
> - goto out;
> - remaining -= len;
> - offset = 0;
> - bvec++;
> - }
> - }
> + msg.msg_flags = MSG_SPLICE_PAGES;
> + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
> + xdr_buf_pagecount(xdr), xdr->page_len);
> + ret = sock_sendmsg(sock, &msg);
> + if (ret < 0)
> + return ret;
> + *sentp += ret;
>
> if (tail->iov_len) {
> ret = svc_tcp_send_kvec(sock, tail, 0);
>
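Putting the pieces together, the reply path after this patch amounts to roughly
the following. This is a simplified, self-contained sketch rather than the real
svc_tcp_sendmsg(): error handling, the TCP record marker, partial-send
accounting and the caller-supplied flags are omitted, the MSG_MORE usage is my
own, and the function name is made up.

/*
 * Rough sketch of the post-patch send path for one RPC reply.
 * For illustration only; not the actual svc_tcp_sendmsg().
 */
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <linux/sunrpc/xdr.h>

static int send_xdr_buf_spliced(struct socket *sock, struct xdr_buf *xdr)
{
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, };
	int ret;

	/* 1. Header: a single kvec, spliced from its backing page. */
	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, xdr->head, 1,
		      xdr->head->iov_len);
	ret = sock_sendmsg(sock, &msg);
	if (ret < 0)
		return ret;

	/*
	 * 2. Body: the whole bio_vec array in one sendmsg() call,
	 *    instead of one kernel_sendpage() call per page.
	 */
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
		      xdr_buf_pagecount(xdr), xdr->page_len);
	ret = sock_sendmsg(sock, &msg);
	if (ret < 0)
		return ret;

	/* 3. Trailer: final kvec; no MSG_MORE so TCP can push the data. */
	msg.msg_flags = MSG_SPLICE_PAGES;
	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, xdr->tail, 1,
		      xdr->tail->iov_len);
	return sock_sendmsg(sock, &msg);
}

The point of the change shows up in step 2: the per-page kernel_sendpage()
loop collapses into a single sock_sendmsg() over the existing bio_vec array.
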
--
Chuck Lever