Message-ID: <20241204172204.4180482-16-dw@davidwei.uk>
Date: Wed, 4 Dec 2024 09:21:54 -0800
From: David Wei <dw@...idwei.uk>
To: io-uring@...r.kernel.org,
netdev@...r.kernel.org
Cc: Jens Axboe <axboe@...nel.dk>,
Pavel Begunkov <asml.silence@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
David Ahern <dsahern@...nel.org>,
Mina Almasry <almasrymina@...gle.com>,
Stanislav Fomichev <stfomichev@...il.com>,
Joe Damato <jdamato@...tly.com>,
Pedro Tammela <pctammela@...atatu.com>
Subject: [PATCH net-next v8 15/17] io_uring/zcrx: throttle receive requests
From: Pavel Begunkov <asml.silence@...il.com>
io_zcrx_tcp_recvmsg() continues until it fails or there is nothing left to
receive. If the other side sends fast enough, we might get stuck in
io_zcrx_tcp_recvmsg() producing more and more CQEs but never letting the
user handle them, leading to unbounded latencies.

Break out of it based on an arbitrarily chosen limit; the upper layer
will either return to userspace or requeue the request.
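
For context only (not part of the patch): below is a minimal userspace
sketch of the bounded-per-call pattern this change applies, i.e. process
at most a fixed number of items per invocation and signal the caller to
requeue the rest so userspace gets a chance to run. All names here
(process_batch, RET_REQUEUE, SKBS_PER_CALL_LIMIT) are illustrative
stand-ins, not kernel or io_uring APIs.

#include <stdio.h>

#define SKBS_PER_CALL_LIMIT	20	/* mirrors IO_SKBS_PER_CALL_LIMIT */
#define RET_REQUEUE		(-2)	/* stand-in for IOU_REQUEUE */

/* Consume at most SKBS_PER_CALL_LIMIT items; ask to be requeued if more remain. */
static int process_batch(int *remaining)
{
	int done = 0;

	while (*remaining > 0) {
		if (done++ >= SKBS_PER_CALL_LIMIT)
			return RET_REQUEUE;	/* cap work done in this call */
		(*remaining)--;
	}
	return 0;
}

int main(void)
{
	int pending = 50, requeues = 0;

	/* The "upper layer": keep reissuing the request until it completes. */
	while (process_batch(&pending) == RET_REQUEUE)
		requeues++;
	printf("drained after %d requeues, %d left\n", requeues, pending);
	return 0;
}

The point of the cap is latency, not throughput: each call does a bounded
amount of work, so completions already posted can be consumed before the
request is run again.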
Reviewed-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
Signed-off-by: David Wei <dw@...idwei.uk>
---
io_uring/net.c | 2 ++
io_uring/zcrx.c | 9 +++++++++
2 files changed, 11 insertions(+)
diff --git a/io_uring/net.c b/io_uring/net.c
index f1431317182e..c8d718d7cbe6 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1266,6 +1266,8 @@ int io_recvzc(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(ret <= 0) && ret != -EAGAIN) {
if (ret == -ERESTARTSYS)
ret = -EINTR;
+ if (ret == IOU_REQUEUE)
+ return IOU_REQUEUE;
req_set_fail(req);
io_req_set_res(req, ret, 0);
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 8e4b9bfaed99..130583fbe7ca 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -24,10 +24,13 @@
#define IO_RQ_MAX_ENTRIES 32768
+#define IO_SKBS_PER_CALL_LIMIT 20
+
struct io_zcrx_args {
struct io_kiocb *req;
struct io_zcrx_ifq *ifq;
struct socket *sock;
+ unsigned nr_skbs;
};
struct io_zc_refill_data {
@@ -705,6 +708,9 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
int i, copy, end, off;
int ret = 0;
+ if (unlikely(args->nr_skbs++ > IO_SKBS_PER_CALL_LIMIT))
+ return -EAGAIN;
+
if (unlikely(offset < skb_headlen(skb))) {
ssize_t copied;
size_t to_copy;
@@ -809,6 +815,9 @@ static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
ret = -ENOTCONN;
else
ret = -EAGAIN;
+ } else if (unlikely(args.nr_skbs > IO_SKBS_PER_CALL_LIMIT) &&
+ (issue_flags & IO_URING_F_MULTISHOT)) {
+ ret = IOU_REQUEUE;
} else if (sock_flag(sk, SOCK_DONE)) {
/* Make it to retry until it finally gets 0. */
if (issue_flags & IO_URING_F_MULTISHOT)
--
2.43.5