Message-ID: <CANdGJ5SOUV1OoonC-3i=5MLUWvoHPBZgp19c3mJTTYbNr2yLeQ@mail.gmail.com>
Date: Thu, 20 Jul 2017 16:28:02 -0500
From: Wei Sun <unlcsewsun@...il.com>
To: Yuchung Cheng <ycheng@...gle.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: A buggy behavior for Linux TCP Reno and HTCP
Hi Yuchung,
Sorry for the confusion. The test case was adapted from an old DSACK
test case, and I forgot to remove some leftover pieces.
Attached is a new, simpler one. Thanks
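
To make the doubling described below concrete, here is a minimal
standalone model of the arithmetic involved (a simplified
illustration with hypothetical names, not the actual kernel code):

/* toy model of Reno's loss/undo arithmetic */
#include <stdio.h>

#define TCP_MIN_SSTHRESH 2U   /* ssthresh is floored at 2 */

static unsigned int max_u32(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

int main(void)
{
	unsigned int cwnd = 2, ssthresh = 5;  /* initial state */

	/* spurious loss: ssthresh = max(cwnd / 2, 2), cwnd collapses */
	ssthresh = max_u32(cwnd / 2, TCP_MIN_SSTHRESH);  /* -> 2 */
	cwnd = 1;

	/* undo assumes cwnd was halved, so it doubles ssthresh back */
	cwnd = max_u32(cwnd, ssthresh * 2);              /* -> 4 */

	printf("cwnd after undo: %u (it was 2 before the loss)\n", cwnd);
	return 0;
}

Compiled and run, this prints a cwnd of 4, i.e. twice the pre-loss value.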
On Wed, Jul 19, 2017 at 2:31 PM, Yuchung Cheng <ycheng@...gle.com> wrote:
> On Tue, Jul 18, 2017 at 2:36 PM, Wei Sun <unlcsewsun@...il.com> wrote:
>> Hi there,
>>
>> We found buggy behavior when using Linux TCP Reno and HTCP in
>> low-bandwidth or highly congested network environments.
>>
>> In short, their undo functions may mistakenly double the cwnd,
>> leading to more aggressive behavior in an already congested scenario.
>>
>>
>> The detailed reason:
>>
>> The current Reno undo function assumes the cwnd was halved on loss
>> (and therefore doubles ssthresh to restore it), but it does not
>> consider the corner case where ssthresh is clamped to its minimum
>> of 2, so doubling it can overshoot the original cwnd.
>>
>> e.g.,
>>                    cwnd   ssthresh
>> Initial state:       2       5
>> Spurious loss:       1       2
>> Undo:                4       5
>>
>> Here the cwnd after undo is twice its value before the loss. Attached
>> is a simple script to reproduce it.
> the packetdrill script is a bit confusing: it disables SACK but then
> the client returns ACKs with SACK blocks; also, the 3 dupacks arrive
> after the RTO, so the sender isn't technically going through fast recovery...
>
> could you provide a better test?
>
>>
>> HTCP has a similar issue, so we recommend storing the cwnd at the time
>> of loss in the .ssthresh implementation and restoring it in .undo_cwnd
>> for both the TCP Reno and HTCP implementations.
>>
>> Thanks
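
To make the suggestion above concrete, here is the same toy model with
the cwnd remembered at loss time and restored on undo (again only an
illustration with hypothetical names, not a patch against
net/ipv4/tcp_cong.c or net/ipv4/tcp_htcp.c):

/* toy model: remember cwnd at loss and restore it on undo */
#include <stdio.h>

static unsigned int max_u32(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

int main(void)
{
	unsigned int cwnd = 2, ssthresh = 5;
	unsigned int loss_cwnd;             /* cwnd remembered at loss time */

	/* .ssthresh hook: remember cwnd before reducing it */
	loss_cwnd = cwnd;
	ssthresh = max_u32(cwnd / 2, 2U);   /* -> 2 */
	cwnd = 1;

	/* .undo_cwnd hook: restore the remembered value, not 2 * ssthresh */
	cwnd = max_u32(cwnd, loss_cwnd);    /* -> 2 */

	printf("cwnd after undo: %u\n", cwnd);
	return 0;
}

With this change the undo brings cwnd back to 2, its value before the
spurious loss, instead of 4.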
Download attachment "TSundo-2-1-4.pkt" of type "application/octet-stream" (1758 bytes)