Message-Id: <9a42187cdc9ce034fd23179c7b31d7cc6a54bd45.1660124059.git.asml.silence@gmail.com>
Date: Wed, 10 Aug 2022 16:49:15 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: io-uring@...r.kernel.org, netdev@...r.kernel.org
Cc: Jens Axboe <axboe@...nel.dk>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, kernel-team@...com,
linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
Wei Liu <wei.liu@...nel.org>, Paul Durrant <paul@....org>,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Pavel Begunkov <asml.silence@...il.com>
Subject: [RFC net-next io_uring 07/11] net/tcp: optimise tcp ubuf refcounting
Add a UARGFL_CALLER_PINNED flag letting protocols know that the caller
holds a reference to the ubuf_info for the duration of the submission,
so they don't need additional refcounting to keep it alive. With that,
TCP can save a refcount get/put pair per send when used with ->msg_ubuf.
Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
---
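As a rough illustration of the caller side (not part of this patch): a
submitter that already owns the ubuf_info, e.g. an io_uring notification
kept alive across the whole submission, could advertise the flag roughly
as below. The my_zc_* names and the surrounding context are hypothetical
and only meant to make the lifetime contract concrete; tcp_sendmsg_locked()
under lock_sock() is used just to keep the sketch self-contained.

/* Hypothetical caller-side sketch, not part of this patch. */
#include <linux/refcount.h>
#include <linux/skbuff.h>
#include <linux/socket.h>
#include <net/sock.h>
#include <net/tcp.h>

struct my_zc_notif {
	struct ubuf_info uarg;		/* owned and pinned by the submitter */
	/* ... submitter's completion bookkeeping ... */
};

/* Completion callback, invoked once the lower layers release the pages. */
static void my_zc_done(struct sk_buff *skb, struct ubuf_info *uarg,
		       bool success)
{
	/* notify the submitter; the notif object outlives the send call */
}

static int my_zc_send(struct sock *sk, struct my_zc_notif *notif,
		      struct msghdr *msg, size_t len)
{
	int ret;

	notif->uarg.callback = my_zc_done;
	refcount_set(&notif->uarg.refcnt, 1);
	/*
	 * The submitter guarantees notif->uarg stays alive until this call
	 * returns, so TCP may skip its own net_zcopy_get()/net_zcopy_put().
	 */
	notif->uarg.flags |= UARGFL_CALLER_PINNED;

	msg->msg_flags |= MSG_ZEROCOPY;
	msg->msg_ubuf = &notif->uarg;

	lock_sock(sk);
	ret = tcp_sendmsg_locked(sk, msg, len);
	release_sock(sk);
	return ret;
}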
include/linux/skbuff.h | 7 +++++++
net/ipv4/tcp.c | 9 ++++++---
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 2b2e0020030b..45fe7f0648d0 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -522,6 +522,13 @@ enum {
 #define SKBFL_ALL_ZEROCOPY	(SKBFL_ZEROCOPY_FRAG | SKBFL_PURE_ZEROCOPY | \
 				 SKBFL_DONT_ORPHAN | SKBFL_MANAGED_FRAG_REFS)
 
+enum {
+	/* The caller holds a reference during the submission so the ubuf won't
+	 * be freed until we return.
+	 */
+	UARGFL_CALLER_PINNED = BIT(0),
+};
+
 /*
  * The callback notifies userspace to release buffers when skb DMA is done in
  * lower device, the skb last reference should be 0 when calling this.
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 3152da8f4763..4925107de57d 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1229,7 +1229,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		if (msg->msg_ubuf) {
 			uarg = msg->msg_ubuf;
-			net_zcopy_get(uarg);
+			if (!(uarg->flags & UARGFL_CALLER_PINNED))
+				net_zcopy_get(uarg);
 			zc = sk->sk_route_caps & NETIF_F_SG;
 		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
 			uarg = msg_zerocopy_realloc(sk, size, skb_zcopy(skb));
@@ -1455,7 +1456,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
 	}
 out_nopush:
-	net_zcopy_put(uarg);
+	if (uarg && !(uarg->flags & UARGFL_CALLER_PINNED))
+		net_zcopy_put(uarg);
 	return copied + copied_syn;
 
 do_error:
@@ -1464,7 +1466,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	if (copied + copied_syn)
 		goto out;
 out_err:
-	net_zcopy_put_abort(uarg, true);
+	if (uarg && !(uarg->flags & UARGFL_CALLER_PINNED))
+		net_zcopy_put_abort(uarg, true);
 	err = sk_stream_error(sk, flags, err);
 	/* make sure we wake any epoll edge trigger waiter */
 	if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) {
--
2.37.0