Message-ID: <CAK6E8=eP3jwTD5t7Cj3h_m4JTJ61p=roqsr02WiSHUANse1ynw@mail.gmail.com>
Date: Wed, 19 Jul 2017 12:31:57 -0700
From: Yuchung Cheng <ycheng@...gle.com>
To: Wei Sun <unlcsewsun@...il.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: A buggy behavior for Linux TCP Reno and HTCP
On Tue, Jul 18, 2017 at 2:36 PM, Wei Sun <unlcsewsun@...il.com> wrote:
> Hi there,
>
> We found buggy behavior when using Linux TCP Reno and HTCP in
> low-bandwidth or highly congested network environments.
>
> Put simply, their undo functions may mistakenly double the cwnd,
> leading to more aggressive behavior in an already highly congested
> scenario.
>
>
> The detailed reason:
>
> The current Reno undo function assumes the cwnd was halved at the
> loss (and therefore doubles ssthresh to undo it), but it doesn't
> account for the corner case where ssthresh is clamped to its floor
> of 2, i.e. when the pre-loss cwnd was below 4.
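>
> For reference, the pre-fix Reno hooks in net/ipv4/tcp_cong.c look
> roughly like the sketch below (paraphrased from memory, not a
> verbatim copy of the kernel source):
>
>     u32 tcp_reno_ssthresh(struct sock *sk)
>     {
>         const struct tcp_sock *tp = tcp_sk(sk);
>
>         /* halve cwnd, but never go below the floor of 2 */
>         return max(tp->snd_cwnd >> 1U, 2U);
>     }
>
>     u32 tcp_reno_undo_cwnd(struct sock *sk)
>     {
>         const struct tcp_sock *tp = tcp_sk(sk);
>
>         /* undo by doubling ssthresh -- wrong whenever ssthresh
>          * was clamped to 2 rather than being a true cwnd/2
>          */
>         return max(tp->snd_cwnd, tp->snd_ssthresh << 1);
>     }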
>
> e.g.,
>
>                      cwnd   ssthresh
>   Initial state:        2          5
>   Spurious loss:        1          2
>   Undo:                 4          5
>
> Here the cwnd after undo (max(1, 2 << 1) = 4) is twice the pre-loss
> value of 2, because the undo doubles the clamped ssthresh instead of
> restoring the original cwnd. Attached is a simple packetdrill script
> to reproduce it.
The packetdrill script is a bit confusing: it disables SACK, but the
client then returns ACKs with SACK blocks; also, the 3 dupacks arrive
after the RTO, so the sender isn't technically going through fast
recovery... Could you provide a better test?
>
> A similar issue affects HTCP, so we recommend storing the cwnd at
> the time of loss in the .ssthresh implementation and restoring it in
> .undo_cwnd for both the TCP Reno and HTCP implementations.
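>
> A minimal sketch of that suggestion (hypothetical: the private
> "struct reno_undo" state and function names below are illustrative
> only, not an actual patch):
>
>     struct reno_undo {
>         u32 loss_cwnd;    /* cwnd snapshot taken at loss time */
>     };
>
>     static u32 reno_undo_ssthresh(struct sock *sk)
>     {
>         struct tcp_sock *tp = tcp_sk(sk);
>         struct reno_undo *ca = inet_csk_ca(sk);
>
>         ca->loss_cwnd = tp->snd_cwnd;    /* remember cwnd at loss */
>         return max(tp->snd_cwnd >> 1U, 2U);
>     }
>
>     static u32 reno_undo_cwnd(struct sock *sk)
>     {
>         const struct tcp_sock *tp = tcp_sk(sk);
>         const struct reno_undo *ca = inet_csk_ca(sk);
>
>         /* restore the snapshot instead of doubling ssthresh */
>         return max(tp->snd_cwnd, ca->loss_cwnd);
>     }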
>
> Thanks