Message-ID: <3386149.1676497685@warthog.procyon.org.uk>
Date: Wed, 15 Feb 2023 21:48:05 +0000
From: David Howells <dhowells@...hat.com>
To: netdev@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Marc Dionne <marc.dionne@...istor.com>,
linux-afs@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [PATCH net-next] rxrpc: Fix overproduction of wakeups to recvmsg()
Fix three cases of overproduction of wakeups:

 (1) rxrpc_input_split_jumbo() conditionally notifies the app that there's
     data for recvmsg() to collect if it queues some data - and then its
     only caller, rxrpc_input_data(), goes and wakes up recvmsg() anyway.

     Fix rxrpc_input_data() to only do the wakeup in failure cases.
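     Roughly, the tail of rxrpc_input_data() then looks like the following
     sketch (simplified from the input.c hunk below; surrounding handling
     elided):

	if (!rxrpc_input_split_jumbo(call, skb)) {
		rxrpc_proto_abort(call, sp->hdr.seq, rxrpc_badmsg_bad_jumbo);
		goto out_notify;	/* Failure: wake recvmsg() to see it */
	}
	return;				/* Success: rxrpc_input_split_jumbo()
					 * already notified the socket if it
					 * queued data */

out_notify:
	trace_rxrpc_notify_socket(call->debug_id, serial);
	rxrpc_notify_socket(call);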
 (2) If a DATA packet is received for a call by the I/O thread whilst
     recvmsg() is busy draining the call's rx queue in the app thread, the
     call will be left on the recvmsg() queue for recvmsg() to pick up,
     even though there isn't any data on it.

     This can cause recvmsg() to unexpectedly return 0 with MSG_EOR unset
     after the reply has been posted to a service call.

     Fix this by discarding pending calls from the recvmsg() queue that
     don't need servicing yet.
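     For illustration, a hypothetical service receive helper like the one
     below (function and buffer names invented for this example) could
     trip over the symptom: a zero-length return with MSG_EOR clear,
     meaning "no data, but the call isn't over either":

	#include <sys/socket.h>
	#include <sys/uio.h>
	#include <stdio.h>

	static void service_recv_once(int fd)
	{
		char buf[1024];
		struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
		struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
		ssize_t n = recvmsg(fd, &msg, 0);

		if (n == 0 && !(msg.msg_flags & MSG_EOR))
			fprintf(stderr, "spurious wakeup: no data, no MSG_EOR\n");
	}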
 (3) Not-yet-completed calls get requeued after having data read from
     them, even if there's no more data on them to be read.

     Fix this by only requeuing them if they have data waiting on them; if
     they don't, the I/O thread will requeue them when data arrives or
     they fail.
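     The not-yet-complete path in rxrpc_recvmsg() then becomes, roughly
     (simplified; see the second recvmsg.c hunk below):

	if (rxrpc_call_has_failed(call))
		goto call_failed;

	/* Only requeue the call (and rewake the app) if it still has data
	 * waiting; otherwise the I/O thread will requeue it when data
	 * arrives or the call fails.
	 */
	if (!skb_queue_empty(&call->recvmsg_queue))
		rxrpc_notify_socket(call);
	goto not_yet_complete;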
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Marc Dionne <marc.dionne@...istor.com>
cc: linux-afs@...ts.infradead.org
---
 include/trace/events/rxrpc.h |  1 +
 net/rxrpc/input.c            |  2 +-
 net/rxrpc/recvmsg.c          | 16 +++++++++++++++-
 3 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index c3c0b0aa8381..4c53a5ef6257 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -318,6 +318,7 @@
 	EM(rxrpc_recvmsg_return,		"RETN") \
 	EM(rxrpc_recvmsg_terminal,		"TERM") \
 	EM(rxrpc_recvmsg_to_be_accepted,	"TBAC") \
+	EM(rxrpc_recvmsg_unqueue,		"UNQU") \
 	E_(rxrpc_recvmsg_wait,			"WAIT")
 
 #define rxrpc_rtt_tx_traces \
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index d68848fce51f..030d64f282f3 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -606,7 +606,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
 		rxrpc_proto_abort(call, sp->hdr.seq, rxrpc_badmsg_bad_jumbo);
 		goto out_notify;
 	}
-	skb = NULL;
+	return;
 
 out_notify:
 	trace_rxrpc_notify_socket(call->debug_id, serial);
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 76eb2b9cd936..a482f88c5fc5 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -334,10 +334,23 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 
 	/* Find the next call and dequeue it if we're not just peeking.  If we
 	 * do dequeue it, that comes with a ref that we will need to release.
+	 * We also want to weed out calls that got requeued whilst we were
+	 * shovelling data out.
 	 */
 	spin_lock(&rx->recvmsg_lock);
 	l = rx->recvmsg_q.next;
 	call = list_entry(l, struct rxrpc_call, recvmsg_link);
+
+	if (!rxrpc_call_is_complete(call) &&
+	    skb_queue_empty(&call->recvmsg_queue)) {
+		list_del_init(&call->recvmsg_link);
+		spin_unlock(&rx->recvmsg_lock);
+		release_sock(&rx->sk);
+		trace_rxrpc_recvmsg(call->debug_id, rxrpc_recvmsg_unqueue, 0);
+		rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
+
 	if (!(flags & MSG_PEEK))
 		list_del_init(&call->recvmsg_link);
 	else
@@ -402,7 +415,8 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	if (rxrpc_call_has_failed(call))
 		goto call_failed;
 
-	rxrpc_notify_socket(call);
+	if (!skb_queue_empty(&call->recvmsg_queue))
+		rxrpc_notify_socket(call);
 	goto not_yet_complete;
 
 call_failed: