Date:   Mon, 20 Jul 2020 09:54:53 +0800
From:   hujunwei <>
To:     Neal Cardwell <>
CC:     Eric Dumazet <>,
        David Miller <>,
        Alexey Kuznetsov <>,
        Hideaki YOSHIFUJI <>,
        Jakub Kicinski <>,
        Netdev <>, <>,
        <>, <>,
        <>, Yuchung Cheng <>,
        Ilpo Jarvinen <>
Subject: Re: [PATCH net-next] tcp: Optimize the recovery of tcp when lack of

On 2020/7/19 11:29, Neal Cardwell wrote:
> On Sat, Jul 18, 2020 at 6:43 AM hujunwei <> wrote:
>> On 2020/7/17 22:44, Neal Cardwell wrote:
>>> On Fri, Jul 17, 2020 at 7:43 AM hujunwei <> wrote:
>>>> From: Junwei Hu <>
>>>> The document RFC 2582 (
>>>> introduces two separate scenarios for TCP congestion control:
>>> Can you please elaborate on how the sender is able to distinguish
>>> between the two scenarios, after your patch?
>>> It seems to me that with this proposed patch, there is the risk of
> spurious fast recoveries due to 3 dupacks in the second
>>> scenario (the sender unnecessarily retransmitted three packets below
>>> "send_high"). Can you please post a packetdrill test to demonstrate
>>> that with this patch the TCP sender does not spuriously enter fast
>>> recovery in such a scenario?
>> Hi Neal,
>> Thanks for your quick reply!
>> What I want to say is that when these three numbers, snd_una, high_seq,
>> and snd_nxt, are the same, it means all data that was outstanding
>> when the Loss state began has been successfully acknowledged.
> Yes, that seems true.
>> So it is time for the sender to exit to the Open state.
>> I'm not sure whether my understanding is correct.
> I don't think we want the sender to exit to the CA_Open state in these
> circumstances. I think section 5 ("Avoiding Multiple Fast
> Retransmits") of RFC 2582 states convincingly that senders should take
> steps to avoid having duplicate acknowledgements at high_seq trigger a
> new fast recovery. Linux TCP implements those steps by *not*
> exiting to the Open state, and instead staying in CA_Loss or
> CA_Recovery.

Thanks Neal,
After reading section 5 of RFC 2582, I think you are right.
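For my own understanding, I wrote down that rule as a standalone sketch
(my paraphrase with hypothetical names, not the actual code in
net/ipv4/tcp_input.c, so details may differ):

/* Standalone sketch, not kernel code: the RFC 2582 section 5 rule as I
 * understand it.  A non-SACK (Reno) sender keeps its CA_Loss/CA_Recovery
 * state while snd_una == high_seq, so duplicate ACKs arriving at high_seq
 * (caused by our own earlier retransmits) cannot start a new fast
 * recovery; only an ACK for data *above* high_seq ends recovery.
 */
#include <stdbool.h>
#include <stdint.h>

/* hypothetical helper name, mirrors the shape of the check */
static bool reno_may_exit_recovery(uint32_t snd_una, uint32_t high_seq,
                                   bool is_reno)
{
        if (is_reno && snd_una == high_seq)
                return false;   /* hold old state until something
                                 * above high_seq is ACKed */
        return true;            /* caller already ensured that
                                 * snd_una has reached high_seq */
}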

> To make things more concrete, here is the kind of timeline/scenario I
> am concerned about with your proposed patch. I have not had cycles to
> cook a packetdrill test like this, but this is the basic idea:
> [connection does not have SACK or TCP timestamps enabled]
> app writes 4*SMSS
> Send packets P1, P2, P3, P4
> TLP, spurious retransmit of P4
> spurious RTO, set cwnd to 1, enter CA_Loss, retransmit P1
> receive ACK for P1 (original copy)
> slow-start, increase cwnd to 2, retransmit P2, P3
> receive ACK for P2 (original copy)
> slow-start, increase cwnd to 3, retransmit P4
> receive ACK for P3 (original copy)
> slow-start, increase cwnd to 4
> receive ACK for P4 (original copy)
> slow-start, increase cwnd to 5
> [with your patch, at this point the sender does not meet the
>  conditions for "Hold old state until something *above* high_seq is ACKed.",
>  so sender exits CA_Loss and enters Open]
> app writes 4*MSS
> send P5, P6, P7, P8
> receive dupack for P4 (due to spurious TLP retransmit of P4)
> receive dupack for P4 (due to spurious CA_Loss retransmit of P1)
> receive dupack for P4 (due to spurious CA_Loss retransmit of P2)
> [with your patch, at this point we risk spuriously entering
>  fast recovery because we have received 3 duplicate ACKs for P4]
> A packetdrill test that shows that this is not the behavior of your
> proposed patch would help support your proposed patch (presuming > is
> replaced by after()).
> best,
> neal
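
For completeness: the after() in the last parenthetical refers to the
kernel's wraparound-safe sequence comparison in include/net/tcp.h.  Below
is a small standalone demo (using my own copies of the helpers, not the
kernel ones) of why a plain ">" is not equivalent once the 32-bit
sequence space wraps:

/* Standalone demo of wraparound-safe sequence comparison.  seq_before()
 * and seq_after() mirror the kernel's before()/after() helpers; main()
 * shows a case where a plain ">" gives the wrong answer after the 32-bit
 * sequence space wraps.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool seq_before(uint32_t seq1, uint32_t seq2)
{
        return (int32_t)(seq1 - seq2) < 0;
}

static bool seq_after(uint32_t seq2, uint32_t seq1)
{
        return seq_before(seq1, seq2);
}

int main(void)
{
        uint32_t high_seq = 0xfffffff0u;  /* just before the wrap */
        uint32_t snd_una  = 0x00000010u;  /* a little data after it */

        printf("plain >   : %d\n", snd_una > high_seq);           /* 0, wrong */
        printf("seq_after : %d\n", seq_after(snd_una, high_seq)); /* 1, right */
        return 0;
}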

Thank you for your detailed answer; I feel I have learned a lot.

Regards,
Junwei
