Message-Id: <1584524289-24187-26-git-send-email-ilpo.jarvinen@helsinki.fi>
Date:   Wed, 18 Mar 2020 11:38:07 +0200
From:   Ilpo Järvinen <ilpo.jarvinen@...sinki.fi>
To:     netdev@...r.kernel.org
Cc:     Yuchung Cheng <ycheng@...gle.com>,
        Neal Cardwell <ncardwell@...gle.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Olivier Tilmans <olivier.tilmans@...ia-bell-labs.com>
Subject: [RFC PATCH 26/28] tcp: limit GSO size to prevent runaway AccECN cep/ACE deficit

From: Ilpo Järvinen <ilpo.jarvinen@...helsinki.fi>

GSO may send segments in such large blocks that the ACE deficit keeps
growing, because the ACE field can only be updated once per super skb.

Limit the size of super skbs while an ACE deficit exists that could
otherwise persist indefinitely. Once the accecn_gso_limit bool becomes
false, it no longer needs to be rechecked during further sending.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@...helsinki.fi>
---
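[ Not part of the patch: a minimal userspace sketch of the capping
  logic above, for reviewers. The TCP_ACCECN_ACE_MAX_DELTA value used
  here is only an assumed stand-in, and deficit_runaway() stands in for
  tcp_accecn_deficit_runaway_test(); the real constant and
  tcp_accecn_ace_deficit() are defined elsewhere in this series and are
  not shown in this patch. ]

/*
 * Standalone sketch of the runaway-deficit check, outside the kernel
 * tree.  Values are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

#define TCP_ACCECN_ACE_MAX_DELTA 6	/* assumed value, illustration only */

/* Mirrors tcp_accecn_deficit_runaway_test(): a large deficit combined
 * with enough cwnd quota to build another oversized super skb.
 */
static bool deficit_runaway(unsigned int ace_deficit, int cwnd_quota)
{
	return ace_deficit >= 2 * TCP_ACCECN_ACE_MAX_DELTA &&
	       cwnd_quota > TCP_ACCECN_ACE_MAX_DELTA - 1;
}

int main(void)
{
	/* While the deficit is large, cap the quota so each super skb
	 * carries at most TCP_ACCECN_ACE_MAX_DELTA - 1 segments and the
	 * ACE counter gets a chance to catch up on the next ACK.
	 */
	unsigned int deficit = 13;
	int cwnd_quota = 45;

	if (deficit_runaway(deficit, cwnd_quota))
		cwnd_quota = TCP_ACCECN_ACE_MAX_DELTA - 1;

	printf("capped cwnd_quota: %d\n", cwnd_quota);
	return 0;
}

[ With the assumed constant this prints 5, mirroring how the
  tcp_write_xmit() hunk below caps cwnd_quota while the deficit
  persists and then drops the check once it no longer applies. ]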
 net/ipv4/tcp_output.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0aec2c57a9cc..4de6510532f2 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2124,6 +2124,23 @@ static bool tcp_snd_wnd_test(const struct tcp_sock *tp,
 	return !after(end_seq, tcp_wnd_end(tp));
 }
 
+/* Runaway ACE deficit possible? */
+static bool tcp_accecn_deficit_runaway_test(const struct tcp_sock *tp,
+					    int cwnd_quota)
+{
+	return (tcp_accecn_ace_deficit(tp) >= 2 * TCP_ACCECN_ACE_MAX_DELTA) &&
+	       (cwnd_quota > TCP_ACCECN_ACE_MAX_DELTA - 1);
+}
+
+static u32 tcp_accecn_gso_limit(struct tcp_sock *tp,
+				const struct sk_buff *skb, int cwnd_quota)
+{
+	if (unlikely(tcp_accecn_deficit_runaway_test(tp, cwnd_quota)))
+		return TCP_ACCECN_ACE_MAX_DELTA - 1;
+
+	return 0;
+}
+
 /* Trim TSO SKB to LEN bytes, put the remaining data into a new packet
  * which is put after SKB on the list.  It is very much like
  * tcp_fragment() except that it may make several kinds of assumptions
@@ -2623,6 +2640,8 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 	int cwnd_quota;
 	int result;
 	bool is_cwnd_limited = false, is_rwnd_limited = false;
+	/* AccECN limit will be lifted below if not needed */
+	bool accecn_gso_limit = tcp_ecn_mode_accecn(tp);
 	u32 max_segs;
 
 	sent_pkts = 0;
@@ -2676,7 +2695,16 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
 						      nonagle : TCP_NAGLE_PUSH))))
 				break;
 		} else {
-			if (!push_one &&
+			if (accecn_gso_limit) {
+				u32 limit = tcp_accecn_gso_limit(tp, skb,
+								 cwnd_quota);
+				if (limit > 0)
+					cwnd_quota = limit;
+				else
+					accecn_gso_limit = false;
+			}
+
+			if (!push_one && !accecn_gso_limit &&
 			    tcp_tso_should_defer(sk, skb, &is_cwnd_limited,
 						 &is_rwnd_limited, max_segs))
 				break;
-- 
2.20.1
