Message-ID: <E1E68CB0-3A46-4040-97F7-89E03FC11C9E@oracle.com>
Date: Mon, 8 Feb 2021 20:17:58 +0000
From: Chuck Lever <chuck.lever@...cle.com>
To: Trond Myklebust <trondmy@...merspace.com>
CC: "sashal@...nel.org" <sashal@...nel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"daire@...g.com" <daire@...g.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH AUTOSEL 5.10 03/45] SUNRPC: Handle TCP socket sends with kernel_sendpage() again
> On Feb 8, 2021, at 3:12 PM, Trond Myklebust <trondmy@...merspace.com> wrote:
>
> On Mon, 2021-02-08 at 19:48 +0000, Chuck Lever wrote:
>>
>>
>>> On Feb 8, 2021, at 2:34 PM, Trond Myklebust <trondmy@...merspace.com> wrote:
>>>
>>> On Tue, 2021-01-19 at 20:25 -0500, Sasha Levin wrote:
>>>> From: Chuck Lever <chuck.lever@...cle.com>
>>>>
>>>> [ Upstream commit 4a85a6a3320b4a622315d2e0ea91a1d2b013bce4 ]
>>>>
>>>> Daire Byrne reports a ~50% aggregate throughput regression on his
>>>> Linux NFS server after commit da1661b93bf4 ("SUNRPC: Teach server to
>>>> use xprt_sock_sendmsg for socket sends"), which replaced
>>>> kernel_sendpage() calls in NFSD's socket send path with calls to
>>>> sock_sendmsg() using iov_iter.
>>>>
>>>> Investigation showed that tcp_sendmsg() was not using zero-copy to
>>>> send the xdr_buf's bvec pages, but instead was relying on memcpy.
>>>> This means copying every byte of a large NFS READ payload.
>>>>
>>>> It looks like TLS sockets do indeed support a ->sendpage method,
>>>> so it's really not necessary to use xprt_sock_sendmsg() to support
>>>> TLS fully on the server. A mechanical reversion of da1661b93bf4 is
>>>> not possible at this point, but we can re-implement the server's
>>>> TCP socket sendmsg path using kernel_sendpage().
>>>>
>>>> Reported-by: Daire Byrne <daire@...g.com>
>>>> BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=209439
>>>> Signed-off-by: Chuck Lever <chuck.lever@...cle.com>
>>>> Signed-off-by: Sasha Levin <sashal@...nel.org>
>>>> ---
>>>> net/sunrpc/svcsock.c | 86 +++++++++++++++++++++++++++++++++++++++++++-
>>>> 1 file changed, 85 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>>>> index c2752e2b9ce34..4404c491eb388 100644
>>>> --- a/net/sunrpc/svcsock.c
>>>> +++ b/net/sunrpc/svcsock.c
>>>> @@ -1062,6 +1062,90 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
>>>> return 0; /* record not complete */
>>>> }
>>>>
>>>> +static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
>>>> +			     int flags)
>>>> +{
>>>> +	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
>>>> +			       offset_in_page(vec->iov_base),
>>>> +			       vec->iov_len, flags);
>>
>> Thanks for your review!
>>
>>> I'm having trouble with this line. This looks like it is trying to
>>> push a slab page into kernel_sendpage().
>>
>> The head and tail kvecs in rq_res are not kmalloc'd; they are
>> backed by pages in rqstp->rq_pages[].
>>
>>
>>> What guarantees that the nfsd thread won't call kfree() before the
>>> socket layer is done transmitting the page?
>>
>> If I understand correctly what Neil told us last week, the
>> reference count on those pages is set up so that one of
>> svc_xprt_release() or the network layer does the final put_page(),
>> in a safe fashion.
>>
>> Before da1661b93bf4 ("SUNRPC: Teach server to use xprt_sock_sendmsg
>> for socket sends"), the original svc_send_common() code did this:
>>
>> - /* send head */
>> - if (slen == xdr->head[0].iov_len)
>> - flags = 0;
>> - len = kernel_sendpage(sock, headpage, headoffset,
>> - xdr->head[0].iov_len, flags);
>> - if (len != xdr->head[0].iov_len)
>> - goto out;
>> - slen -= xdr->head[0].iov_len;
>> - if (slen == 0)
>> - goto out;
>>
>>
>>
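To illustrate the handoff Neil described: tcp_sendpage() pins the page
itself, independently of the server thread's reference. A minimal
sketch (the wrapper name svc_send_page_safely() is hypothetical, for
illustration only):

    static int svc_send_page_safely(struct socket *sock, struct page *page,
                                    int offset, size_t len, int flags)
    {
            /*
             * tcp_sendpage() takes its own reference on @page when it
             * attaches the page to an skb fragment, and drops that
             * reference only after the data has gone out on the wire.
             * The server thread's reference (held via rqstp->rq_pages[]
             * and dropped by svc_xprt_release()) is independent of the
             * transmission lifetime, so whichever put_page() runs last
             * frees the page.
             */
            return kernel_sendpage(sock, page, offset, len, flags);
    }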
>
> OK, so then only the argument kvec can be allocated on the slab (thanks
> to svc_deferred_recv)? Is that correct?
The RPC/RDMA Receive buffer is kmalloc'd; that would be used for
rq_arg.head/tail. But for TCP, I believe the head kvec is always
pulled out of rq_pages[].
svc_process() sets up rq_res.head this way:
1508 resv->iov_base = page_address(rqstp->rq_respages[0]);
1509 resv->iov_len = 0;
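Since the head is carved straight out of rq_respages[0], virt_to_page()
simply inverts that page_address() call. A minimal illustration (valid
for lowmem pages, assuming a struct svc_rqst *rqstp in scope; head_page
is just a local name for this sketch):

    /* Recover the backing page of the head kvec. */
    struct page *head_page = virt_to_page(rqstp->rq_res.head[0].iov_base);

    /*
     * head_page == rqstp->rq_respages[0], so svc_tcp_send_kvec() hands
     * a real rq_pages[] page to kernel_sendpage(), not slab memory.
     */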
I would need to audit the code to confirm that rq_res.tail is never
kmalloc'd.
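Until then, a defensive variant is conceivable (a sketch only, not what
the merged patch does): sendpage_ok() from include/linux/net.h rejects
slab pages and pages with a zero refcount, and the send can fall back
to a copying kernel_sendmsg():

    static int svc_tcp_send_kvec_checked(struct socket *sock,
                                         const struct kvec *vec, int flags)
    {
            struct page *page = virt_to_page(vec->iov_base);

            if (!sendpage_ok(page)) {
                    /* Slab or otherwise unsafe memory: copy instead. */
                    struct msghdr msg = { .msg_flags = flags };
                    struct kvec v = *vec;

                    return kernel_sendmsg(sock, &msg, &v, 1, v.iov_len);
            }
            return kernel_sendpage(sock, page,
                                   offset_in_page(vec->iov_base),
                                   vec->iov_len, flags);
    }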
--
Chuck Lever