Message-ID: <168979108540.1905271.9720708849149797793.stgit@morisot.1015granger.net>
Date: Wed, 19 Jul 2023 14:30:56 -0400
From: Chuck Lever <cel@...nel.org>
To: linux-nfs@...r.kernel.org, netdev@...r.kernel.org
Cc: Chuck Lever <chuck.lever@...cle.com>, David Howells <dhowells@...hat.com>
Subject: [PATCH v3 0/5] Send RPC-on-TCP with one sock_sendmsg() call

After some discussion with David Howells at LSF/MM 2023, we arrived
at a plan to use a single sock_sendmsg() call to transmit each RPC
message on socket-based transports. This is a first step in the
transition to handling folios that carry file content, but it has
scalability benefits as well.

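For illustration, here is a minimal sketch of the single-call send
path, assuming the RPC message has already been marshaled into a
bio_vec array. The function name and its parameters are hypothetical,
not the helpers this series actually adds:

    #include <linux/bvec.h>
    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/uio.h>

    /* Sketch: transmit a fully-marshaled RPC message with one
     * sock_sendmsg() call. @bvec/@nvecs describe the message
     * payload; @len is the total number of bytes to send.
     */
    static int example_xmit_whole_message(struct socket *sock,
                                          struct bio_vec *bvec,
                                          unsigned int nvecs, size_t len)
    {
            struct msghdr msg = {
                    /* splice page contents instead of copying them */
                    .msg_flags = MSG_SPLICE_PAGES,
            };

            /* One iterator covers the entire message ... */
            iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, nvecs, len);

            /* ... so the whole RPC goes down in a single call. */
            return sock_sendmsg(sock, &msg);
    }
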
Initial benchmark results show 5-10% throughput gains with a fast
link layer and a tmpfs export. I've added some other ideas to this
series for further discussion -- these have also shown performance
benefits in my testing.

Changes since v2:
* Keep rq_bvec instead of switching to a per-transport bio_vec array
* Remove the cork/uncork logic in svc_tcp_sendto
* Attempt to mitigate wake-up storms when receiving large RPC
  messages (idea sketched below)

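One way to damp such wake-up storms, shown here purely as a
hypothetical illustration of the idea (the patch itself may take a
different approach), is to raise the socket's receive low-water mark
to the number of bytes still outstanding, so that sk_data_ready does
not wake a server thread for every arriving segment:

    #include <net/sock.h>

    /* Sketch: defer data_ready wake-ups until @wanted bytes have
     * been queued. Mirrors what sock_setsockopt() does for
     * SO_RCVLOWAT: prefer the protocol's ->set_rcvlowat op, else
     * write the field directly.
     */
    static void example_set_rcvlowat(struct sock *sk, int wanted)
    {
            const struct proto_ops *ops = sk->sk_socket->ops;

            if (ops->set_rcvlowat)
                    ops->set_rcvlowat(sk, wanted);
            else
                    WRITE_ONCE(sk->sk_rcvlowat, wanted ? : 1);
    }
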
Changes since RFC:
* Moved the xdr_buf-to-bio_vec array helper into generic XDR code
  (see the sketch after this list)
* Added bio_vec array bounds-checking
* Re-ordered patches

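As a rough picture of what that helper does (names and details here
are illustrative; the actual helper added to net/sunrpc/xdr.c may
differ), it walks the xdr_buf's head, page array, and tail, and
refuses to write past the end of the caller's bio_vec array:

    #include <linux/bvec.h>
    #include <linux/errno.h>
    #include <linux/minmax.h>
    #include <linux/mm.h>
    #include <linux/sunrpc/xdr.h>

    /* Sketch: populate @bvec (capacity @bvec_size) from @xdr.
     * Returns the number of entries used, or -EMSGSIZE if the
     * array is too small.
     */
    static int example_xdr_buf_to_bvec(struct bio_vec *bvec,
                                       unsigned int bvec_size,
                                       const struct xdr_buf *xdr)
    {
            unsigned int count = 0, offset, remaining;
            struct page **pages = xdr->pages;

            if (xdr->head[0].iov_len) {
                    if (count >= bvec_size)
                            return -EMSGSIZE;
                    bvec_set_virt(&bvec[count++], xdr->head[0].iov_base,
                                  xdr->head[0].iov_len);
            }

            offset = offset_in_page(xdr->page_base);
            remaining = xdr->page_len;
            while (remaining) {
                    unsigned int len = min_t(unsigned int, remaining,
                                             PAGE_SIZE - offset);

                    if (count >= bvec_size)
                            return -EMSGSIZE;
                    bvec_set_page(&bvec[count++], *pages++, len, offset);
                    remaining -= len;
                    offset = 0;
            }

            if (xdr->tail[0].iov_len) {
                    if (count >= bvec_size)
                            return -EMSGSIZE;
                    bvec_set_virt(&bvec[count++], xdr->tail[0].iov_base,
                                  xdr->tail[0].iov_len);
            }
            return count;
    }
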
---
Chuck Lever (5):
      SUNRPC: Convert svc_tcp_sendmsg to use bio_vecs directly
      SUNRPC: Send RPC message on TCP with a single sock_sendmsg() call
      SUNRPC: Convert svc_udp_sendto() to use the per-socket bio_vec array
      SUNRPC: Revert e0a912e8ddba
      SUNRPC: Reduce thread wake-up rate when receiving large RPC messages

 include/linux/sunrpc/svcsock.h |   4 +-
 include/linux/sunrpc/xdr.h     |   2 +
 net/sunrpc/svcsock.c           | 127 +++++++++++++++------------------
 net/sunrpc/xdr.c               |  50 +++++++++++++
 4 files changed, 112 insertions(+), 71 deletions(-)

--
Chuck Lever