Message-ID: <lsq.1578512578.423430354@decadent.org.uk>
Date: Wed, 08 Jan 2020 19:43:18 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, Denis Kirjanov <kda@...ux-powerpc.org>,
"Arnd Bergmann" <arnd@...db.de>,
"Yuchung Cheng" <ycheng@...gle.com>,
"Eric Dumazet" <edumazet@...gle.com>,
"Neal Cardwell" <ncardwell@...gle.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 3.16 20/63] tcp/dccp: drop SYN packets if accept queue is full
3.16.81-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@...gle.com>
commit 5ea8ea2cb7f1d0db15762c9b0bb9e7330425a071 upstream.
Per listen(fd, backlog) rules, there is really no point in accepting a SYN,
sending a SYNACK, and then dropping the following ACK packet if the accept
queue is full, because the application is not draining the accept queue
fast enough.

This behavior fools TCP clients into believing they have established a
flow, while there is nothing at the server side. They might then send
about 10 MSS (if using IW10) that will be dropped anyway while the server
is under stress.
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Acked-by: Neal Cardwell <ncardwell@...gle.com>
Acked-by: Yuchung Cheng <ycheng@...gle.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Arnd Bergmann <arnd@...db.de>
[bwh: Backported to 3.16: Apply TCP changes in both tcp_ipv4.c and tcp_ipv6.c]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
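To illustrate the user-visible effect (not part of the patch, just a minimal
userspace sketch; the port number and the backlog of 1 are arbitrary choices
for the example): a listener that never calls accept() fills its accept queue
after a single connection, and with this change further SYNs are simply
dropped, so additional clients stay in SYN retransmission instead of believing
they have an established flow that the server will never service.

/* Illustrative sketch only: fill the accept queue and never drain it. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(12345);	/* arbitrary port for the example */

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	/* backlog of 1: the accept queue overflows almost immediately */
	if (listen(fd, 1) < 0) {
		perror("listen");
		return 1;
	}

	/* Never call accept(); once the queue is full, the kernel now drops
	 * new SYNs instead of completing handshakes it cannot hand over. */
	pause();
	close(fd);
	return 0;
}

With the old behaviour such clients would complete the three-way handshake and
could push an initial window of data that the server immediately drops.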
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -287,11 +287,6 @@ static inline int inet_csk_reqsk_queue_l
 	return reqsk_queue_len(&inet_csk(sk)->icsk_accept_queue);
 }
 
-static inline int inet_csk_reqsk_queue_young(const struct sock *sk)
-{
-	return reqsk_queue_len_young(&inet_csk(sk)->icsk_accept_queue);
-}
-
 static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
 {
 	return reqsk_queue_is_full(&inet_csk(sk)->icsk_accept_queue);
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -621,13 +621,7 @@ int dccp_v4_conn_request(struct sock *sk
 	if (inet_csk_reqsk_queue_is_full(sk))
 		goto drop;
 
-	/*
-	 * Accept backlog is full. If we have already queued enough
-	 * of warm entries in syn queue, drop request. It is better than
-	 * clogging syn queue with openreqs with exponentially increasing
-	 * timeout.
-	 */
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+	if (sk_acceptq_is_full(sk))
 		goto drop;
 
 	req = inet_reqsk_alloc(&dccp_request_sock_ops);
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -391,7 +391,7 @@ static int dccp_v6_conn_request(struct s
 	if (inet_csk_reqsk_queue_is_full(sk))
 		goto drop;
 
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+	if (sk_acceptq_is_full(sk))
 		goto drop;
 
 	req = inet6_reqsk_alloc(&dccp6_request_sock_ops);
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1292,12 +1292,7 @@ int tcp_v4_conn_request(struct sock *sk,
 			goto drop;
 	}
 
-	/* Accept backlog is full. If we have already queued enough
-	 * of warm entries in syn queue, drop request. It is better than
-	 * clogging syn queue with openreqs with exponentially increasing
-	 * timeout.
-	 */
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+	if (sk_acceptq_is_full(sk)) {
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 		goto drop;
 	}
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1009,7 +1009,7 @@ static int tcp_v6_conn_request(struct so
 			goto drop;
 	}
 
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+	if (sk_acceptq_is_full(sk)) {
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 		goto drop;
 	}