Message-Id: <20140616211954.6E12BA3A89@unicorn.suse.cz>
Date:	Mon, 16 Jun 2014 23:19:54 +0200 (CEST)
From:	Michal Kubecek <mkubecek@...e.cz>
To:	"David S. Miller" <davem@...emloft.net>
Cc:	netdev@...r.kernel.org, Yuchung Cheng <ycheng@...gle.com>,
	Alexey Kuznetsov <kuznet@....inr.ac.ru>,
	James Morris <jmorris@...ei.org>,
	Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
	Patrick McHardy <kaber@...sh.net>
Subject: [PATCH net] tcp: avoid multiple ssthresh reductions in one retransmit window

RFC 5681 says that the ssthresh reduction in response to an RTO
should be done only once and should not be repeated until all
packets from the first loss are retransmitted. RFC 6582 (as well
as its predecessor RFC 3782) is even more specific and says that
when a loss is detected, one should mark the current SND.NXT and
ssthresh should not be reduced again due to loss until SND.UNA
reaches this remembered value.
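
As a rough illustration of that rule (a user-space sketch with
made-up names, not the kernel's data structures; sequence-number
wraparound is ignored), the bookkeeping looks roughly like this:

  #include <stdint.h>

  /* Illustrative connection state; field names follow the RFC text
   * rather than the kernel's struct tcp_sock. */
  struct rfc_conn {
  	uint32_t snd_una;	/* SND.UNA: oldest unacknowledged byte */
  	uint32_t snd_nxt;	/* SND.NXT: next byte to be sent */
  	uint32_t recover;	/* SND.NXT recorded at the last loss */
  	uint32_t cwnd;
  	uint32_t ssthresh;
  };

  /* React to an RTO as RFC 5681/6582 describe: reduce ssthresh at
   * most once per loss window, i.e. only after SND.UNA has caught
   * up with the previously recorded 'recover' point. */
  void rfc_rto(struct rfc_conn *c)
  {
  	if (c->snd_una >= c->recover) {
  		c->ssthresh = c->cwnd / 2 > 2 ? c->cwnd / 2 : 2;
  		c->recover = c->snd_nxt;	/* mark current SND.NXT */
  	}
  	c->cwnd = 1;	/* RTO restarts slow start from one segment */
  }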

In the Linux implementation, this is done in tcp_enter_loss(),
but an additional condition

  (icsk->icsk_ca_state == TCP_CA_Loss && !icsk->icsk_retransmits)

allows ssthresh to be reduced again before snd_una reaches
high_seq (the snd_nxt value recorded at the previous loss),
because icsk_retransmits is reset as soon as snd_una moves
forward. As a result, if a retransmission timeout occurs early in
the retransmit phase, we can adjust snd_ssthresh based on a very
low value of cwnd. This is especially harmful for reno congestion
control, whose cwnd grows only linearly in the congestion
avoidance phase.
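
To illustrate the effect (a user-space toy, not kernel code; the
numbers and the two back-to-back RTOs below are invented), compare
the old check, which re-reduces ssthresh once icsk_retransmits is
cleared, with the check the patch leaves in place:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Minimal stand-in for the fields involved; not the real tcp_sock. */
  struct toy_sk {
  	uint32_t snd_una, snd_nxt, high_seq;
  	uint32_t snd_cwnd, snd_ssthresh;
  	int retransmits;		/* mirrors icsk_retransmits */
  };

  static void rto(struct toy_sk *sk, bool old_check)
  {
  	/* equivalent of !after(tp->high_seq, tp->snd_una) */
  	bool reduce = sk->snd_una >= sk->high_seq;

  	if (old_check)			/* the condition removed by the patch */
  		reduce = reduce || sk->retransmits == 0;

  	if (reduce) {
  		sk->snd_ssthresh = sk->snd_cwnd / 2 > 2 ? sk->snd_cwnd / 2 : 2;
  		sk->high_seq = sk->snd_nxt;
  	}
  	sk->snd_cwnd = 1;
  	sk->retransmits++;
  }

  int main(void)
  {
  	for (int old = 1; old >= 0; old--) {
  		struct toy_sk sk = { .snd_una = 0, .snd_nxt = 100,
  				     .snd_cwnd = 40, .snd_ssthresh = 64 };

  		rto(&sk, old);		/* first RTO: ssthresh drops to 20 */
  		sk.snd_una = 10;	/* a few segments get ACKed ... */
  		sk.retransmits = 0;	/* ... which clears icsk_retransmits */
  		sk.snd_cwnd = 2;	/* cwnd has barely recovered */
  		rto(&sk, old);		/* second RTO, snd_una < high_seq */

  		printf("%s condition: ssthresh = %u\n",
  		       old ? "old" : "patched",
  		       (unsigned)sk.snd_ssthresh);
  	}
  	return 0;
  }

With the old condition, the second RTO halves an already tiny cwnd
and ssthresh ends up at 2; with the patched condition, ssthresh
stays at 20 until snd_una reaches high_seq.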

The patch removes the condition above so that snd_ssthresh is
not reduced again until snd_una reaches high_seq, as described in
RFCs 5681 and 6582.

Signed-off-by: Michal Kubecek <mkubecek@...e.cz>
---
 net/ipv4/tcp_input.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 40661fc..768ba88 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1917,8 +1917,7 @@ void tcp_enter_loss(struct sock *sk, int how)
 
 	/* Reduce ssthresh if it has not yet been made inside this window. */
 	if (icsk->icsk_ca_state <= TCP_CA_Disorder ||
-	    !after(tp->high_seq, tp->snd_una) ||
-	    (icsk->icsk_ca_state == TCP_CA_Loss && !icsk->icsk_retransmits)) {
+	    !after(tp->high_seq, tp->snd_una)) {
 		new_recovery = true;
 		tp->prior_ssthresh = tcp_current_ssthresh(sk);
 		tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk);
-- 
1.8.4.5
