Date:   Wed, 26 May 2021 13:38:24 +0300
From:   Leonard Crestez <cdleonard@...il.com>
To:     Neal Cardwell <ncardwell@...gle.com>,
        Matt Mathis <mattmathis@...gle.com>,
        Eric Dumazet <edumazet@...gle.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Willem de Bruijn <willemb@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        John Heffner <johnwheffner@...il.com>,
        Leonard Crestez <lcrestez@...venets.com>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Roopa Prabhu <roopa@...ulusnetworks.com>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [RFCv2 0/3] tcp: Improve mtu probe preconditions

According to RFC4821 Section 7.4, "Protocols MAY delay sending non-probes
in order to accumulate enough data", but in practice Linux only sends
probes when a large amount of data has already accumulated on the send side.

Another improvement is to rely on TCP RACK performing timely loss detection
with fewer outstanding packets. If RACK is enabled, the amount of queued
data required before a probe can be sent can be shrunk.

Successive successful mtu probes reduce the cwnd, since cwnd is measured
in packets and the probes make us send bigger ones. On low-latency links
the cwnd value can get stuck below 11, which prevents further probing. The
cwnd logic in tcp_mtu_probe can be reworked to be based on the number of
packets that we actually need to send instead of arbitrary constants.

It is difficult to improve this behavior without introducing unreasonable
delays or even stalls. Looking at the current behavior of tcp_mtu_probe, it
already waits in some scenarios: when there is not enough room inside the
cwnd or when there is a gap of unacknowledged data between snd_una and
snd_nxt. It appears to be safe to wait as long as packets_in_flight() != 0.

Signed-off-by: Leonard Crestez <cdleonard@...il.com>

---

Previous RFC: https://lore.kernel.org/netdev/cover.1620733594.git.cdleonard@gmail.com/

This series seems to be "correct" this time; I would appreciate any feedback.
It's possible my understanding of when it is safe to return 0 from
tcp_mtu_probe is incorrect. It's also possible that even the current code
interacts poorly with delayed acks in some circumstances.

The tcp_xmit_size_goal changes were dropped. It's still possible to see
strange interactions between tcp_push_one and mtu probing: if the receiver
window is small (60k), the sender does a "push_one" once half a window (30k)
has accumulated, and that prevents mtu probing even when the sender writes
more than enough data in a single syscall.

Leonard Crestez (3):
  tcp: Use smaller mtu probes if RACK is enabled
  tcp: Adjust congestion window handling for mtu probe
  tcp: Wait for sufficient data in tcp_mtu_probe

 Documentation/networking/ip-sysctl.rst | 10 ++++
 include/net/netns/ipv4.h               |  2 +
 net/ipv4/sysctl_net_ipv4.c             | 14 ++++++
 net/ipv4/tcp_ipv4.c                    |  2 +
 net/ipv4/tcp_output.c                  | 70 +++++++++++++++++++++-----
 5 files changed, 86 insertions(+), 12 deletions(-)


base-commit: e4e92ee78702b13ad55118d8b66f06e1aef62586
-- 
2.25.1
