Message-Id: <526fe4cb9cda287bedfc92b3888b48a4f3b0250b.1660124059.git.asml.silence@gmail.com>
Date: Wed, 10 Aug 2022 16:49:16 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: io-uring@...r.kernel.org, netdev@...r.kernel.org
Cc: Jens Axboe <axboe@...nel.dk>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, kernel-team@...com,
linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
Wei Liu <wei.liu@...nel.org>, Paul Durrant <paul@....org>,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Pavel Begunkov <asml.silence@...il.com>
Subject: [RFC net-next io_uring 08/11] net: let callers provide ->msg_ubuf refs
Some msg_ubuf providers, such as io_uring, keep elaborate ubuf_info
reference batching and caching, so it is beneficial to let the network
layer optionally steal some of the cached refs.
Add UARGFL_GIFT_REF: when it is set, the caller holds at least one extra
reference that it is willing to gift away. If the network layer decides
to take the ref, it should clear the flag.
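For illustration, a provider might drive the protocol along these lines.
This is a minimal sketch, not part of the patch: my_uarg_get() and
my_uarg_put() stand in for hypothetical provider-side refcount helpers;
only ->msg_ubuf, UARGFL_GIFT_REF and sock_sendmsg() are real.

static int provider_sendmsg(struct socket *sock, struct msghdr *msg,
			    struct ubuf_info *uarg)
{
	int ret;

	/* Take the extra ref we are willing to gift away. my_uarg_get()
	 * and my_uarg_put() are hypothetical provider helpers.
	 */
	my_uarg_get(uarg);
	uarg->flags |= UARGFL_GIFT_REF;
	msg->msg_ubuf = uarg;
	msg->msg_flags |= MSG_ZEROCOPY;

	ret = sock_sendmsg(sock, msg);

	if (uarg->flags & UARGFL_GIFT_REF) {
		/* The network layer didn't consume the gifted ref,
		 * e.g. it fell back to copying, so reclaim it.
		 */
		uarg->flags &= ~UARGFL_GIFT_REF;
		my_uarg_put(uarg);
	}
	return ret;
}

If the flag is still set on return, the ref was not stolen and the
provider can put it back into its cache; if it was cleared, the network
layer now owns that reference and will release it through the usual
ubuf_info completion path.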
Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
---
 include/linux/skbuff.h | 14 ++++++++++++++
 net/ipv4/ip_output.c   |  1 +
 net/ipv6/ip6_output.c  |  1 +
 3 files changed, 16 insertions(+)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 45fe7f0648d0..972ec676e222 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -527,6 +527,11 @@ enum {
 	 * be freed until we return.
 	 */
 	UARGFL_CALLER_PINNED = BIT(0),
+
+	/* The caller can gift one ubuf reference. The flag should be cleared
+	 * when the reference is taken.
+	 */
+	UARGFL_GIFT_REF = BIT(1),
 };
 
 /*
@@ -1709,6 +1714,15 @@ static inline void net_zcopy_put(struct ubuf_info *uarg)
 		uarg->callback(NULL, uarg, true);
 }
 
+static inline bool net_zcopy_get_gift_ref(struct ubuf_info *uarg)
+{
+	bool has_ref;
+
+	has_ref = uarg->flags & UARGFL_GIFT_REF;
+	uarg->flags &= ~UARGFL_GIFT_REF;
+	return has_ref;
+}
+
 static inline void net_zcopy_put_abort(struct ubuf_info *uarg, bool have_uref)
 {
 	if (uarg) {
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 546897a4b4fa..9d42b6dd6b78 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -1032,6 +1032,7 @@ static int __ip_append_data(struct sock *sk,
 				paged = true;
 				zc = true;
 				uarg = msg->msg_ubuf;
+				extra_uref = net_zcopy_get_gift_ref(uarg);
 			}
 		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
 			uarg = msg_zerocopy_realloc(sk, length, skb_zcopy(skb));
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 6d4f01a0cf6e..8d8a8bbdb8df 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1557,6 +1557,7 @@ static int __ip6_append_data(struct sock *sk,
 				paged = true;
 				zc = true;
 				uarg = msg->msg_ubuf;
+				extra_uref = net_zcopy_get_gift_ref(uarg);
 			}
 		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
 			uarg = msg_zerocopy_realloc(sk, length, skb_zcopy(skb));
--
2.37.0