Message-Id: <20200602080425.93712-1-kerneljasonxing@gmail.com>
Date: Tue, 2 Jun 2020 16:04:25 +0800
From: kerneljasonxing@...il.com
To: edumazet@...gle.com, davem@...emloft.net, kuznet@....inr.ac.ru,
yoshfuji@...ux-ipv6.org
Cc: netdev@...r.kernel.org, kerneljasonxing@...il.com,
linux-kernel@...r.kernel.org, liweishi@...ishou.com,
lishujin@...ishou.com
Subject: [PATCH] tcp: fix TCP socket leak in BBR mode

From: Jason Xing <kerneljasonxing@...il.com>

TCP sockets cannot be released because tcp_internal_pacing() takes an
extra reference with sock_hold() every time it arms the pacing timer,
including when an RTO retransmission fires while the timer is already
queued. hrtimer_start() on a queued timer only re-arms it, so the
handler and its single matching sock_put() run just once and sk_refcnt
never drops back down. The leaked sockets pile up in the TCP slabs and
can eventually trigger an OOM on a machine that has been running for a
long time; on a busy machine the issue can show up after only a few days.

Add an exception to skip the unneeded sock_hold(), and the pointless
re-arming of the timer, when the pacing_timer is already enqueued.
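
For context, here is a condensed sketch of the two paths involved,
paraphrased from the tcp_output.c generation this patch targets
(trimmed down to the refcounting; not a verbatim copy):

static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
{
	u64 len_ns;

	/* ... compute len_ns from skb->len and sk->sk_pacing_rate ... */
	hrtimer_start(&tcp_sk(sk)->pacing_timer,
		      ktime_add_ns(ktime_get(), len_ns),
		      HRTIMER_MODE_ABS_PINNED);
	sock_hold(sk);	/* taken on every call, even when the timer
			 * was already queued and is merely re-armed */
}

enum hrtimer_restart tcp_pace_kick(struct hrtimer *timer)
{
	struct tcp_sock *tp = container_of(timer, struct tcp_sock, pacing_timer);
	struct sock *sk = (struct sock *)tp;

	/* ... hand the socket over to the TSQ tasklet ... */
	sock_put(sk);	/* runs once per expiry, so the extra
			 * sock_hold() above is never dropped */
	return HRTIMER_NORESTART;
}
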
Reproduce procedure (example commands follow the list):
0) cat /proc/slabinfo | grep TCP
1) switch net.ipv4.tcp_congestion_control to bbr
2) use a load generator such as wrk to send requests
3) use tc to increase the delay on the device and simulate a busy link
4) cat /proc/slabinfo | grep TCP
5) kill the wrk command and watch the number of TCP objects and slabs
6) finally, notice that those numbers do not decrease
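
As a concrete illustration of steps 1) to 3), something along these
lines can be used; the interface name, the delay value and the wrk
parameters are illustrative assumptions, not the original setup:

  grep TCP /proc/slabinfo                        # steps 0) and 4)
  sysctl -w net.ipv4.tcp_congestion_control=bbr  # step 1)
  tc qdisc add dev eth0 root netem delay 100ms   # step 3)
  wrk -t8 -c512 -d300s http://<server>/          # step 2)
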
Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
Signed-off-by: liweishi <liweishi@...ishou.com>
Signed-off-by: Shujin Li <lishujin@...ishou.com>
---
 net/ipv4/tcp_output.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index cc4ba42..5cf63d9 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -969,7 +969,8 @@ static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
 	u64 len_ns;
 	u32 rate;
 
-	if (!tcp_needs_internal_pacing(sk))
+	if (!tcp_needs_internal_pacing(sk) ||
+	    hrtimer_is_queued(&tcp_sk(sk)->pacing_timer))
 		return;
 	rate = sk->sk_pacing_rate;
 	if (!rate || rate == ~0U)
--
1.8.3.1