Message-ID: <4cc36c02-d656-5d0c-7313-3a1128213f12@gmail.com>
Date:   Thu, 17 May 2018 21:08:55 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Ryan Mounce <ryan@...nce.com.au>,
        Toke Høiland-Jørgensen 
        <toke@...e.dk>
Cc:     Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
        Cake List <cake@...ts.bufferbloat.net>
Subject: Re: [Cake] [PATCH net-next v12 3/7] sch_cake: Add optional ACK filter



On 05/17/2018 07:36 PM, Ryan Mounce wrote:
> On 17 May 2018 at 22:41, Toke Høiland-Jørgensen <toke@...e.dk> wrote:
>> Eric Dumazet <eric.dumazet@...il.com> writes:
>>
>>> On 05/17/2018 04:23 AM, Toke Høiland-Jørgensen wrote:
>>>
>>>>
>>>> We don't do full parsing of SACKs, no; we were trying to keep things
>>>> simple... We do detect the presence of SACK options, though, and the
>>>> presence of SACK options on an ACK will make previous ACKs be considered
>>>> redundant.
>>>>
>>>
>>> But they are not redundant in some cases, particularly when reorders
>>> happen in the network.
>>
>> Huh. I was under the impression that SACKs were basically cumulative
>> until cleared.
>>
>> I.e., in packet sequence ABCDE where B and D are lost, C would have
>> SACK(B) and E would have SACK(B,D). Are you saying that E would only
>> have SACK(D)?
> 
> SACK works by acknowledging additional ranges above those that have
> been ACKed, rather than ACKing up to the largest seen sequence number
> and reporting missing ranges before that.
> 
> A - ACK(A)
> B - lost
> C - ACK(A) + SACK(C)
> D - lost
> E - ACK(A) + SACK(C, E)
> 
> Cake does check that the ACK sequence number is greater, or that it is
> equal and the 'newer' ACK has the SACK option present. It doesn't
> compare the sequence numbers inside two SACKs. If the two SACKs in the
> above example had been reordered before reaching cake's ACK filter in
> aggressive mode, the wrong one would be filtered.
> 
> This is a limitation of my naive SACK handling in cake. The default
> 'conservative' mode happens to mitigate the problem in the above
> scenario, but the issue could still present itself in more
> pathological cases. It's fixable, however I'm not sure this corner
> case is sufficiently common or severe to warrant the extra complexity.
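
(For illustration only: the snippet below is a hand-written, user-space
sketch of the comparison Ryan describes, with invented names, not code
taken from sch_cake. It shows why two pure SACKs for the same cumulative
ACK compare as mutually redundant, so whichever arrives second survives
the filter.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct ack_info {
    uint32_t ack_seq;   /* cumulative ACK number, host order */
    bool     has_sack;  /* SACK option present? */
};

/* Is sequence number a after b in 32-bit wrap-around arithmetic? */
static bool seq_after(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) > 0;
}

/* The naive rule: a newer ACK makes an older one redundant if its
 * cumulative ACK is strictly larger, or equal with a SACK option
 * present. The SACK blocks themselves are not compared. */
static bool makes_redundant(const struct ack_info *newer,
                            const struct ack_info *older)
{
    if (seq_after(newer->ack_seq, older->ack_seq))
        return true;
    return newer->ack_seq == older->ack_seq && newer->has_sack;
}

int main(void)
{
    /* ACK(A)+SACK(C) and ACK(A)+SACK(C,E) from the example above */
    struct ack_info sack_c  = { .ack_seq = 1100, .has_sack = true };
    struct ack_info sack_ce = { .ack_seq = 1100, .has_sack = true };

    /* Both directions report "redundant" (prints "1 1"): if the two
     * were reordered in flight, the more informative one is dropped. */
    printf("%d %d\n", makes_redundant(&sack_ce, &sack_c),
           makes_redundant(&sack_c, &sack_ce));
    return 0;
}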

The extra complexity is absolutely required for inclusion in upstream Linux.

I recommend reading RFC 2018, the whole of section 4 (Generating Sack Options: Data Receiver Behavior).

The proposed ACK filter in Cake is messing with the protocol, since the first rule is not respected:

* The first SACK block (i.e., the one immediately following the
      kind and length fields in the option) MUST specify the contiguous
      block of data containing the segment which triggered this ACK,
      unless that segment advanced the Acknowledgment Number field in
      the header.  This assures that the ACK with the SACK option
      reflects the most recent change in the data receiver's buffer
      queue.
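
(As a hand-written illustration of that rule, not taken from any
implementation: with the A..E example above, say each segment is 100
bytes starting at sequence 1000, the blocks come out ordered by recency,
so the option itself encodes which range arrived last:

  C arrives:  ACK 1100, SACK 1200-1300
  E arrives:  ACK 1100, SACK 1400-1500, 1200-1300
              (the block for E comes first, since E triggered this ACK)

That ordering is information a filter can destroy by keeping the wrong
ACK.)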


An ACK filter must either:

Not merge ACKs if they contain different SACK blocks.

Or make a precise analysis of the SACK blocks to determine whether the merge is allowed,
i.e. that no useful information is lost.
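
(A rough user-space sketch of what the second option could look like,
with invented names, not a patch: dropping the older ACK loses nothing
only if the newer ACK acknowledges at least as much data and every SACK
block of the older ACK is still covered by the newer one's cumulative
ACK or SACK blocks. DSACK and the first-block ordering rule quoted above
would need extra care on top of this.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One SACK block as carried in the TCP option (RFC 2018): [start, end) */
struct sack_block {
    uint32_t start_seq;
    uint32_t end_seq;
};

struct parsed_ack {
    uint32_t          ack_seq;    /* cumulative ACK */
    int               nblocks;    /* 0..4 SACK blocks */
    struct sack_block block[4];
};

/* a >= b in 32-bit wrap-around sequence space */
static bool seq_geq(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) >= 0;
}

/* True if [start, end) is fully covered by 'ack': below its cumulative
 * ACK, or contained in one of its SACK blocks. */
static bool range_covered(const struct parsed_ack *ack,
                          uint32_t start, uint32_t end)
{
    int i;

    if (seq_geq(ack->ack_seq, end))
        return true;
    for (i = 0; i < ack->nblocks; i++)
        if (seq_geq(start, ack->block[i].start_seq) &&
            seq_geq(ack->block[i].end_seq, end))
            return true;
    return false;
}

/* Dropping 'older' in favour of 'newer' loses no information only if the
 * newer ACK acks at least as much and still covers every older SACK block. */
static bool merge_is_safe(const struct parsed_ack *newer,
                          const struct parsed_ack *older)
{
    int i;

    if (!seq_geq(newer->ack_seq, older->ack_seq))
        return false;
    for (i = 0; i < older->nblocks; i++)
        if (!range_covered(newer, older->block[i].start_seq,
                           older->block[i].end_seq))
            return false;
    return true;
}

int main(void)
{
    /* ACK(A)+SACK(C) vs ACK(A)+SACK(E, C) from the example above */
    struct parsed_ack old_ack = { .ack_seq = 1100, .nblocks = 1,
                                  .block = { { 1200, 1300 } } };
    struct parsed_ack new_ack = { .ack_seq = 1100, .nblocks = 2,
                                  .block = { { 1400, 1500 }, { 1200, 1300 } } };

    /* prints "1 0": the old ACK may be dropped for the new one, but not
     * the reverse */
    printf("%d %d\n", merge_is_safe(&new_ack, &old_ack),
           merge_is_safe(&old_ack, &new_ack));
    return 0;
}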

The sender should get all the information about which segments were received correctly,
assuming no ACKs are dropped because of congestion on the return path.