Message-Id: <20201208091910.37618-1-cambda@linux.alibaba.com>
Date: Tue, 8 Dec 2020 17:19:10 +0800
From: Cambda Zhu <cambda@...ux.alibaba.com>
To: netdev <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>
Cc: Dust Li <dust.li@...ux.alibaba.com>,
Tony Lu <tonylu@...ux.alibaba.com>,
Cambda Zhu <cambda@...ux.alibaba.com>
Subject: [PATCH net-next] net: Limit logical shift left of TCP probe0 timeout

For each TCP zero window probe, icsk_backoff is increased by one and
its maximum value is tcp_retries2. If tcp_retries2 is greater than 63,
the probe0 timeout shift count can exceed the width of the 64-bit
operand. On x86_64/ARMv8/MIPS the shift count is masked to the range
0 to 63, while on ARMv7 the result is zero. If the shift count is
masked, only a few probes are sent with a timeout shorter than
TCP_RTO_MAX; if the result is zero, it takes tcp_retries2 probes to
end this false timeout. Besides, a bitwise shift by a count greater
than or equal to the operand width is undefined behavior.
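
For illustration only, a minimal user-space sketch (not kernel code;
the base and backoff values below are made up) that triggers the
out-of-range shift:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base = 200;	   /* stand-in for tcp_probe0_base(), in ms */
	unsigned int backoff = 64; /* stand-in for icsk_backoff > 63 */

	/* Shift count >= operand width is undefined behavior in C.
	 * In practice x86_64 masks the count (printing 200 here),
	 * while e.g. ARMv7 yields 0.
	 */
	printf("%llu\n", (unsigned long long)(base << backoff));
	return 0;
}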

This patch limits the backoff used in the shift. Since the maximum
value of max_when is TCP_RTO_MAX and the minimum value of the timeout
base is TCP_RTO_MIN, it is sufficient to cap the backoff at the number
of shifts needed to go from TCP_RTO_MIN to TCP_RTO_MAX.
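
For reference, with the default definitions in include/net/tcp.h
(TCP_RTO_MAX = 120*HZ, TCP_RTO_MIN = HZ/5), TCP_RTO_MAX / TCP_RTO_MIN
is 600 and ilog2(600) + 1 is 10, so the backoff is capped at 10. Since
TCP_RTO_MIN << 10 already exceeds TCP_RTO_MAX, the existing min_t()
against max_when still bounds the final timeout.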

Signed-off-by: Cambda Zhu <cambda@...ux.alibaba.com>
---
 include/net/tcp.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index d4ef5bf94168..82044179c345 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1321,7 +1321,9 @@ static inline unsigned long tcp_probe0_base(const struct sock *sk)
 static inline unsigned long tcp_probe0_when(const struct sock *sk,
					     unsigned long max_when)
 {
-	u64 when = (u64)tcp_probe0_base(sk) << inet_csk(sk)->icsk_backoff;
+	u8 backoff = min_t(u8, ilog2(TCP_RTO_MAX / TCP_RTO_MIN) + 1,
+			   inet_csk(sk)->icsk_backoff);
+	u64 when = (u64)tcp_probe0_base(sk) << backoff;
 
 	return (unsigned long)min_t(u64, when, max_when);
 }
--
2.16.6