Message-ID: <5CD4BF8B-402C-4735-BF04-52B8D62F5EED@oracle.com>
Date: Mon, 8 Feb 2021 19:48:39 +0000
From: Chuck Lever <chuck.lever@...cle.com>
To: Trond Myklebust <trondmy@...merspace.com>
CC: "sashal@...nel.org" <sashal@...nel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
"daire@...g.com" <daire@...g.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH AUTOSEL 5.10 03/45] SUNRPC: Handle TCP socket sends with
kernel_sendpage() again
> On Feb 8, 2021, at 2:34 PM, Trond Myklebust <trondmy@...merspace.com> wrote:
>
> On Tue, 2021-01-19 at 20:25 -0500, Sasha Levin wrote:
>> From: Chuck Lever <chuck.lever@...cle.com>
>>
>> [ Upstream commit 4a85a6a3320b4a622315d2e0ea91a1d2b013bce4 ]
>>
>> Daire Byrne reports a ~50% aggregate throughput regression on his
>> Linux NFS server after commit da1661b93bf4 ("SUNRPC: Teach server to
>> use xprt_sock_sendmsg for socket sends"), which replaced
>> kernel_sendpage() calls in NFSD's socket send path with calls to
>> sock_sendmsg() using iov_iter.
>>
>> Investigation showed that tcp_sendmsg() was not using zero-copy to
>> send the xdr_buf's bvec pages, but instead was relying on memcpy.
>> This means copying every byte of a large NFS READ payload.
>>
>> It looks like TLS sockets do indeed support a ->sendpage method,
>> so it's really not necessary to use xprt_sock_sendmsg() to support
>> TLS fully on the server. A mechanical reversion of da1661b93bf4 is
>> not possible at this point, but we can re-implement the server's
>> TCP socket sendmsg path using kernel_sendpage().
>>
>> Reported-by: Daire Byrne <daire@...g.com>
>> BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=209439
>> Signed-off-by: Chuck Lever <chuck.lever@...cle.com>
>> Signed-off-by: Sasha Levin <sashal@...nel.org>
>> ---
>> net/sunrpc/svcsock.c | 86 +++++++++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 85 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> index c2752e2b9ce34..4404c491eb388 100644
>> --- a/net/sunrpc/svcsock.c
>> +++ b/net/sunrpc/svcsock.c
>> @@ -1062,6 +1062,90 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
>> return 0; /* record not complete */
>> }
>>
>> +static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
>> +			      int flags)
>> +{
>> +	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
>> +			       offset_in_page(vec->iov_base),
>> +			       vec->iov_len, flags);
Thanks for your review!
> I'm having trouble with this line. This looks like it is trying to push
> a slab page into kernel_sendpage().
The head and tail kvecs in rq_res are not kmalloc'd; they are
backed by pages in rqstp->rq_pages[].
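For illustration, here is a minimal sketch of what I mean, loosely
based on the response setup in svc_process() (the exact code varies
by kernel version):

	/* The response head kvec points at a whole page taken from
	 * the rq_pages[] array, not at a kmalloc'd slab object, so
	 * virt_to_page(vec->iov_base) in svc_tcp_send_kvec() above
	 * resolves to a real, refcounted page.
	 */
	struct kvec *resv = &rqstp->rq_res.head[0];

	resv->iov_base = page_address(rqstp->rq_respages[0]);
	resv->iov_len = 0;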
> What guarantees that the nfsd
> thread won't call kfree() before the socket layer is done transmitting
> the page?
If I understand correctly what Neil told us last week, the
reference count on those pages is set up so that either
svc_xprt_release() or the network layer performs the final
put_page(), in a safe fashion.
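Roughly, the lifetime looks like this (a simplified sketch of the
do_tcp_sendpages() path, not the verbatim kernel code):

	/* The network layer takes its own page reference before
	 * attaching the page to an skb fragment ...
	 */
	get_page(page);
	skb_fill_page_desc(skb, i, page, offset, copy);

	/* ... and drops it with put_page() only when the skb is
	 * freed after the data has been transmitted.
	 */

Meanwhile svc_xprt_release() drops the server thread's references
via svc_free_res_pages(), so whichever put_page() runs last frees
the page, and neither side can pull it out from under the other.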
Before da1661b93bf4 ("SUNRPC: Teach server to use xprt_sock_sendmsg
for socket sends"), the original svc_send_common() code did this:
-	/* send head */
-	if (slen == xdr->head[0].iov_len)
-		flags = 0;
-	len = kernel_sendpage(sock, headpage, headoffset,
-			      xdr->head[0].iov_len, flags);
-	if (len != xdr->head[0].iov_len)
-		goto out;
-	slen -= xdr->head[0].iov_len;
-	if (slen == 0)
-		goto out;
--
Chuck Lever