Message-ID: <CAK6E8=d+-kA_PyVgAoe0dtGg--ek0CurehQZ7JXo=32em2Gv+Q@mail.gmail.com>
Date: Fri, 21 Jul 2017 14:16:50 -0700
From: Yuchung Cheng <ycheng@...gle.com>
To: Neal Cardwell <ncardwell@...gle.com>
Cc: Lisong Xu <xu@....edu>, Wei Sun <unlcsewsun@...il.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: A buggy behavior for Linux TCP Reno and HTCP
On Fri, Jul 21, 2017 at 1:46 PM, Neal Cardwell <ncardwell@...gle.com> wrote:
> On Fri, Jul 21, 2017 at 4:27 PM, Lisong Xu <xu@....edu> wrote:
>>
>> Hi Yuchung,
>>
>> This test scenario is just one example that triggers this bug. In
>> general, as long as cwnd < 4, the undo function has this bug.
>
>
> Yes, personally I agree that this issue seems general enough to be worth
> fixing, in the sense that if cwnd < 4 then we may well be very congested,
> so we don't want to get hit by this bug, wherein an undo of a loss
> recovery can cause cwnd to suddenly jump (from 1, 2, or 3) up to 4.
>
> It seems that all of the CCs that use tcp_reno_undo_cwnd() share this
> bug.
>
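For reference, the arithmetic behind that jump is easy to reproduce. The
following is a hand-written standalone sketch mirroring the logic of the
current Reno helpers (max(cwnd >> 1, 2) for ssthresh, max(cwnd,
ssthresh << 1) for undo); it is not the kernel source itself:

#include <stdio.h>

/* Mirrors tcp_reno_ssthresh(): halve cwnd, with a floor of 2. */
static unsigned int reno_ssthresh(unsigned int cwnd)
{
	unsigned int half = cwnd >> 1;

	return half > 2 ? half : 2;
}

/* Mirrors the current tcp_reno_undo_cwnd(): restore max(cwnd, 2 * ssthresh). */
static unsigned int reno_undo_cwnd(unsigned int cwnd, unsigned int ssthresh)
{
	unsigned int twice = ssthresh << 1;

	return cwnd > twice ? cwnd : twice;
}

int main(void)
{
	/* For cwnd 1..3, ssthresh is clamped to 2, so undo returns 4. */
	for (unsigned int cwnd = 1; cwnd <= 5; cwnd++) {
		unsigned int ssthresh = reno_ssthresh(cwnd);

		printf("cwnd %u -> ssthresh %u -> undo %u\n",
		       cwnd, ssthresh, reno_undo_cwnd(cwnd, ssthresh));
	}
	return 0;
}

Compiled and run, this prints an undo value of 4 for cwnd 1, 2, and 3,
which is exactly the jump described above.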
> I guess in my mind the only question is whether we want to add a
> tcp_foo_undo_cwnd() and ca->loss_cwnd to every CC module to handle this
> issue (i.e. make every CC module handle it the way CUBIC does), or (my
> preference) just add a tp->loss_cwnd field so we can use shared code in
> tcp_reno_undo_cwnd() to get this right across all CC modules.

I would prefer the former because loss_cwnd may not be universal TCP
state, just like ssthresh carries no meaning in some CCs (e.g. BBR). It
also seems more consistent with the recent change that made undo_cwnd
mandatory:

commit e97991832a4ea4a5f47d65f068a4c966a2eb5730
Author: Florian Westphal <fw@...len.de>
Date:   Mon Nov 21 14:18:38 2016 +0100

    tcp: make undo_cwnd mandatory for congestion modules
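For concreteness, a sketch of the two shapes being weighed; this is
illustrative only ("foo" is a placeholder module name, and tp->loss_cwnd
is the new field Neal proposes, not an existing one):

/* Option (a), per-module state, the way CUBIC already handles it:
 * snapshot cwnd when ssthresh is cut, restore the snapshot on undo.
 */
static u32 foo_ssthresh(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);
	struct foo_ca *ca = inet_csk_ca(sk);

	ca->loss_cwnd = tp->snd_cwnd;	/* remember cwnd before the cut */
	return max(tp->snd_cwnd >> 1U, 2U);
}

static u32 foo_undo_cwnd(struct sock *sk)
{
	struct foo_ca *ca = inet_csk_ca(sk);

	return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
}

/* Option (b), shared state: core TCP would snapshot cwnd into a new
 * tp->loss_cwnd when recovery starts, so the shared helper undoes
 * correctly for every module that uses it.
 */
u32 tcp_reno_undo_cwnd(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);

	return max(tp->snd_cwnd, tp->loss_cwnd);
}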
>
> neal
>