Message-ID: <20241202143057.378147-28-dhowells@redhat.com>
Date: Mon, 2 Dec 2024 14:30:45 +0000
From: David Howells <dhowells@...hat.com>
To: netdev@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>,
Marc Dionne <marc.dionne@...istor.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
linux-afs@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Simon Wilkinson <sxw@...istor.com>
Subject: [PATCH net-next 27/37] rxrpc: Fix the calculation and use of RTO
Make the following changes to the calculation and use of RTO:

 (1) Fix rxrpc_resend() to use the backed-off RTO value obtained from
     rxrpc_get_rto_backoff() rather than deriving it directly from
     call->peer->rto_us.  Without the backoff applied, packets may be
     retransmitted too early.

 (2) An RTO value too close to the RTT causes a lot of extraneous resends
     because the measured RTT doesn't account for the time the server takes
     to clear out its receive queue.  Worse, responses to PING-ACKs are
     made as fast as possible, so their samples come in below the
     DATA-requested-ACK RTT and skew the calculated RTT downwards.

     Fix this by adding 100ms to the calculated RTO and imposing a lower
     bound of 200ms on the result (see the illustrative sketch after this
     list).
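
Purely as an illustrative sketch (not part of the patch), and assuming the
RTO is tracked in microseconds, the intended bounding arithmetic looks
roughly like the following.  The names bound_rto_us and RTO_MAX_US are made
up here, with RTO_MAX_US standing in for RXRPC_RTO_MAX and its value
assumed only for illustration:

	#include <stdint.h>

	#define RTO_MAX_US (120 * 1000 * 1000)	/* stand-in for RXRPC_RTO_MAX */

	/* Add 100ms to the computed RTO, floor the result at 200ms and
	 * retain the existing upper cap.
	 */
	static uint32_t bound_rto_us(uint32_t rto_us)
	{
		uint32_t bounded = rto_us + 100000;	/* add 100ms */

		if (bounded < 200000)			/* lower limit of 200ms */
			bounded = 200000;
		if (bounded > RTO_MAX_US)		/* keep the upper cap */
			bounded = RTO_MAX_US;
		return bounded;
	}

For example, a computed RTO of 50ms would come out as 200ms and one of
350ms as 450ms, so the RTO can no longer sit right on top of the measured
RTT.
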
Fixes: c410bf01933e ("rxrpc: Fix the excessive initial retransmission timeout")
Fixes: 37473e416234 ("rxrpc: Clean up the resend algorithm")
Signed-off-by: David Howells <dhowells@...hat.com>
Suggested-by: Simon Wilkinson <sxw@...istor.com>
cc: Marc Dionne <marc.dionne@...istor.com>
cc: "David S. Miller" <davem@...emloft.net>
cc: Eric Dumazet <edumazet@...gle.com>
cc: Jakub Kicinski <kuba@...nel.org>
cc: Paolo Abeni <pabeni@...hat.com>
cc: linux-afs@...ts.infradead.org
cc: netdev@...r.kernel.org
---
 net/rxrpc/call_event.c | 3 ++-
 net/rxrpc/rtt.c        | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 48bc06842f99..1f9b1964e142 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -103,7 +103,8 @@ void rxrpc_resend(struct rxrpc_call *call, rxrpc_serial_t ack_serial, bool ping_
 		.now = ktime_get_real(),
 	};
 	struct rxrpc_txqueue *tq = call->tx_queue;
-	ktime_t lowest_xmit_ts = KTIME_MAX, rto = ns_to_ktime(call->peer->rto_us * NSEC_PER_USEC);
+	ktime_t lowest_xmit_ts = KTIME_MAX;
+	ktime_t rto = rxrpc_get_rto_backoff(call->peer, false);
 	bool unacked = false;
 
 	_enter("{%d,%d}", call->tx_bottom, call->tx_top);
diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c
index e0b7d99854b4..3f1ec8e420a6 100644
--- a/net/rxrpc/rtt.c
+++ b/net/rxrpc/rtt.c
@@ -27,7 +27,7 @@ static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer)
 static u32 rxrpc_bound_rto(u32 rto)
 {
-	return umin(rto, RXRPC_RTO_MAX);
+	return clamp(200000, rto + 100000, RXRPC_RTO_MAX);
 }
 
 /*