Message-Id: <20181121.155051.1200147906934774148.davem@davemloft.net>
Date: Wed, 21 Nov 2018 15:50:51 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: edumazet@...gle.com
Cc: netdev@...r.kernel.org, jean-louis@...ond.be, ncardwell@...gle.com,
ycheng@...gle.com, eric.dumazet@...il.com
Subject: Re: [PATCH net] tcp: defer SACK compression after DupThresh
From: Eric Dumazet <edumazet@...gle.com>
Date: Tue, 20 Nov 2018 05:53:59 -0800
> Jean-Louis reported a TCP regression and bisected it to the recent
> SACK compression.
>
> After a loss episode (the receiver not able to keep up, dropping
> packets because its backlog is full), the Linux TCP stack sends
> a single SACK (DUPACK).
>
> The sender then waits a full RTO before recovering the losses.
>
> While RFC 6675 says in section 5, "Algorithm Details",
>
> (2) If DupAcks < DupThresh but IsLost (HighACK + 1) returns true --
> indicating at least three segments have arrived above the current
> cumulative acknowledgment point, which is taken to indicate loss
> -- go to step (4).
> ...
> (4) Invoke fast retransmit and enter loss recovery as follows:
>
> there are old TCP stacks that do not implement this strategy and
> still count dupacks before starting fast retransmit.
>
> While these stacks probably perform poorly when receivers implement
> LRO/GRO, we should be a little more gentle to them.
>
> This patch makes sure we do not enable SACK compression unless
> 3 dupacks have been sent since the last rcv_nxt update (see the
> sketch below the quoted message).
>
> Ideally we should even rearm the timer to send one or two
> more DUPACKs if no more packets are coming, but that will
> be work targeted at linux-4.21.
>
> Many thanks to Jean-Louis for bisecting the issue, providing
> packet captures and testing this patch.
>
> Fixes: 5d9f4262b7ea ("tcp: add SACK compression")
> Reported-by: Jean-Louis Dupond <jean-louis@...ond.be>
> Tested-by: Jean-Louis Dupond <jean-louis@...ond.be>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Acked-by: Neal Cardwell <ncardwell@...gle.com>
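
To make the behaviour concrete, here is a small stand-alone C model of
the gating described in the changelog above. It is an illustration only,
not the applied patch: the names rx_state, dup_ack_count, ack_must_go_now
and DUPTHRESH are invented for the sketch, and it ignores delayed-ACK
rules, retransmits below rcv_nxt and the compression timer itself.

/*
 * Stand-alone model (not the kernel patch) of the gating described
 * above: out-of-order segments generate immediate DUPACKs until
 * DUPTHRESH of them have been sent since the last advance of
 * rcv_nxt; only later SACKs are eligible for compression.
 */
#include <stdbool.h>
#include <stdio.h>

#define DUPTHRESH 3	/* classic fast-retransmit threshold */

struct rx_state {
	unsigned int rcv_nxt;		/* next in-order byte expected */
	unsigned int dup_ack_count;	/* dupacks sent since rcv_nxt moved */
};

/*
 * Returns true if the ACK for this segment must be sent immediately,
 * false if it may be delayed or absorbed by the compression timer.
 */
static bool ack_must_go_now(struct rx_state *rx, unsigned int seq,
			    unsigned int len)
{
	if (seq == rx->rcv_nxt) {
		/* In-order data: ACK point advances, reset the counter. */
		rx->rcv_nxt += len;
		rx->dup_ack_count = 0;
		return false;
	}

	/*
	 * Out-of-order data: this ACK is a DUPACK carrying SACK blocks.
	 * Send the first DUPTHRESH of them right away so that senders
	 * which count dupacks (instead of using RFC 6675 IsLost()) still
	 * enter fast retransmit instead of waiting for the RTO.
	 */
	if (rx->dup_ack_count < DUPTHRESH) {
		rx->dup_ack_count++;
		return true;
	}

	/* Past the threshold: safe to let SACK compression absorb it. */
	return false;
}

int main(void)
{
	struct rx_state rx = { .rcv_nxt = 1000, .dup_ack_count = 0 };
	/* 1000..1099 arrives in order, 1100..1199 is lost, rest are OOO. */
	unsigned int seqs[] = { 1000, 1200, 1300, 1400, 1500, 1600 };

	for (unsigned int i = 0; i < sizeof(seqs) / sizeof(seqs[0]); i++)
		printf("seg %u -> %s\n", seqs[i],
		       ack_must_go_now(&rx, seqs[i], 100) ?
		       "immediate dupack" : "delayed/compressible ack");
	return 0;
}

The important property in this sketch is that the counter resets whenever
rcv_nxt advances, so every new loss episode again produces at least
DUPTHRESH immediate DUPACKs for old dupack-counting senders before
compression kicks in.
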
Applied and queued up for -stable.
Thanks Eric.