Message-ID: <5af52ab4-237f-8646-76e4-5e24236d9b4a@drivenets.com>
Date:   Mon, 26 Apr 2021 20:09:13 +0300
From:   Leonard Crestez <lcrestez@...venets.com>
To:     Neal Cardwell <ncardwell@...gle.com>,
        Matt Mathis <mattmathis@...gle.com>
Cc:     Willem de Bruijn <willemb@...gle.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Roopa Prabhu <roopa@...ulusnetworks.com>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Yuchung Cheng <ycheng@...gle.com>,
        John Heffner <johnwheffner@...il.com>
Subject: Re: [RFC] tcp: Delay sending non-probes for RFC4821 mtu probing

On 26.04.2021 18:59, Neal Cardwell wrote:
> On Sun, Apr 25, 2021 at 10:34 PM Leonard Crestez <lcrestez@...venets.com> wrote:
>> On 4/21/21 3:47 PM, Neal Cardwell wrote:
>>> On Wed, Apr 21, 2021 at 6:21 AM Leonard Crestez <cdleonard@...il.com> wrote:

>>> If the goal is to increase the frequency of PMTU probes, which seems
>>> like a valid goal, I would suggest that we rethink the Linux heuristic
>>> for triggering PMTU probes in the light of the fact that the loss
>>> detection mechanism is now RACK-TLP, which provides quick recovery in
>>> a much wider variety of scenarios.
>>
>>> You mention:
>>>> Linux waits for probe_size + (1 + retries) * mss_cache to be available
>>>
>>> The code in question seems to be:
>>>
>>>     size_needed = probe_size + (tp->reordering + 1) * tp->mss_cache;
>>> How about just changing this to:
>>>
>>>     size_needed = probe_size + tp->mss_cache;
>>>
>>> The rationale would be that if that amount of data is available, then
>>> the sender can send one probe and one following current-mss-size
>>> packet. If the path MTU has not increased to allow the probe of size
>>> probe_size to pass through the network, then the following
>>> current-mss-size packet will likely pass through the network, generate
>>> a SACK, and trigger a RACK fast recovery 1/4*min_rtt later, when the
>>> RACK reorder timer fires.
>>
>> This appears to almost work, except it stalls after a while. I spent
>> some time investigating it and it seems that cwnd is shrunk on mss increases
>> and does not go back up. This causes probes to be skipped because of a
>> "snd_cwnd < 11" condition.
>>
>> I don't understand where that magical "11" comes from; could it be
>> shrunk? Maybe it's meant to only send probes when the cwnd is above the
>> default of 10? Then maybe mtu_probe_success shouldn't shrink cwnd below
>> what is required for an additional probe, or at least round up.
>>
>> The shrinkage of cwnd is a problem with this "short probes" approach:
>> tcp_is_cwnd_limited returns false because tp->max_packets_out is
>> smaller (4). With longer probes tp->max_packets_out is larger (6), so
>> tcp_is_cwnd_limited returns true even for a cwnd of 10.
>>
>> I'm testing with namespace-to-namespace loopback, so my delays are
>> close to zero. I tried to introduce an artificial delay of 30ms (using
>> tc netem) and it works, but 20ms does not.
> 
> I agree the magic 11 seems outdated and unnecessarily high, given RACK-TLP.
> 
> I think it would be fine to change the magic 11 to a magic
> (TCP_FASTRETRANS_THRESH+1), aka 3+1=4:
> 
>    - tp->snd_cwnd < 11 ||
>    + tp->snd_cwnd < (TCP_FASTRETRANS_THRESH + 1) ||
> 
> As long as the cwnd is >= TCP_FASTRETRANS_THRESH+1, the sender should
> usually be able to send the 1 probe packet and then 3 additional
> packets beyond the probe. In the common case (with no reordering), a
> failed probe should then allow the sender to quickly receive 3 SACKed
> segments and enter fast recovery.
> Even if the sender doesn't have 3 additional packets, or if reordering
> has been detected, RACK-TLP should be able to start recovery
> quickly (5/4*RTT if there is at least one SACK, or 2*RTT for a TLP if
> there is no SACK).

As far as I understand, tp->reordering is a dynamic estimate of the 
fast-retransmit threshold, meant to cope with environments with lots of 
reordering. Your suggestion seems equivalent to the current size_needed 
calculation, except counted in packets instead of bytes.
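
To make sure I'm reading the comparison right, here is a tiny standalone 
model of the two gates (the helper names and the reordering default of 3 
are mine for illustration, not the actual tcp_mtu_probe() code):

/* Standalone model, not kernel code: it only illustrates the
 * packets-vs-bytes point for the "enough room to probe?" gate.
 */
#include <stdbool.h>
#include <stdio.h>

#define TCP_FASTRETRANS_THRESH 3

/* Current gate: enough queued data (bytes) to cover the probe plus
 * (reordering + 1) full-size segments.
 */
static bool enough_queued_bytes(unsigned int queued, unsigned int probe_size,
                                unsigned int mss, unsigned int reordering)
{
        unsigned int size_needed = probe_size + (reordering + 1) * mss;

        return queued >= size_needed;
}

/* Suggested gate: enough cwnd (packets) for the probe plus
 * TCP_FASTRETRANS_THRESH follow-up segments.
 */
static bool enough_cwnd_packets(unsigned int snd_cwnd)
{
        return snd_cwnd >= TCP_FASTRETRANS_THRESH + 1;
}

int main(void)
{
        unsigned int mss = 1400, probe_size = 2 * mss;

        /* With the default reordering of 3 the byte gate wants the probe
         * plus 4 segments of queued data, while the packet gate wants a
         * cwnd of 4 (probe + 3 follow-ups): the same kind of headroom,
         * just counted differently.
         */
        printf("bytes gate:   %d\n",
               enough_queued_bytes(probe_size + 4 * mss, probe_size, mss, 3));
        printf("packets gate: %d\n", enough_cwnd_packets(4));
        return 0;
}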

Wouldn't it be easier to drop the "11" check and just verify that 
size_needed fits into cwnd as bytes?
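
Something along those lines is what I have in mind; an untested 
standalone sketch rather than a real patch, with a made-up struct 
standing in for the tcp_sock fields involved:

/* Untested sketch: instead of the fixed "snd_cwnd < 11" packet check,
 * verify that size_needed fits into cwnd once cwnd is converted to
 * bytes with the current mss.
 */
#include <stdbool.h>

struct mtu_probe_ctx {                  /* stand-in for tcp_sock fields */
        unsigned int snd_cwnd;          /* congestion window, in packets */
        unsigned int mss_cache;         /* current mss, in bytes */
        unsigned int reordering;        /* dynamic fastretrans threshold */
};

static bool mtu_probe_fits_cwnd(const struct mtu_probe_ctx *tp,
                                unsigned int probe_size)
{
        unsigned int size_needed = probe_size +
                                   (tp->reordering + 1) * tp->mss_cache;

        /* cwnd is tracked in packets; scale it by mss so the comparison
         * is against the same byte-based size_needed used elsewhere.
         */
        return tp->snd_cwnd * tp->mss_cache >= size_needed;
}

That would collapse the packet-count gate and the byte-based size_needed 
gate into a single check that scales with mss instead of relying on a 
magic number.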

--
Regards,
Leonard
