Message-Id: <20240817163400.2616134-1-mrzhang97@gmail.com>
Date: Sat, 17 Aug 2024 11:33:57 -0500
From: Mingrui Zhang <mrzhang97@...il.com>
To: edumazet@...gle.com,
	davem@...emloft.net,
	ncardwell@...gle.com,
	netdev@...r.kernel.org
Cc: Mingrui Zhang <mrzhang97@...il.com>,
	Lisong Xu <xu@....edu>
Subject: [PATCH net v4 0/3] tcp_cubic: fix to achieve at least the same throughput as Reno

This patch series fixes several CUBIC bugs so that "CUBIC achieves at
least the same throughput as Reno in small-BDP networks"
[RFC 9438: https://www.rfc-editor.org/rfc/rfc9438.html].

It consists of three bug fixes, all modifying the function bictcp_update()
in tcp_cubic.c, which controls how fast CUBIC increases its
congestion window snd_cwnd.

(1) tcp_cubic: fix to run bictcp_update() at least once per RTT
(2) tcp_cubic: fix to match Reno additive increment
(3) tcp_cubic: fix to use emulated Reno cwnd one RTT in the future

Experiments:

Below are Mininet experiments to demonstrate the performance difference
between the original CUBIC and patched CUBIC.

Network: link capacity = 100Mbps, RTT = 4ms

TCP flows: one Reno and one CUBIC, each with an initial cwnd of 10 packets.
The first data packet of each flow is lost.

snd_cwnd of the Reno and original CUBIC flows:
https://github.com/zmrui/tcp_cubic_fix/blob/main/renocubic_fixb0.jpg

snd_cwnd of the Reno and patched CUBIC (with bug fixes 1, 2, and 3) flows:
https://github.com/zmrui/tcp_cubic_fix/blob/main/renocubic_fixb1b2b3.jpg

Results for patched CUBIC with different combinations of
bug fixes 1, 2, and 3 can be found at the following link,
along with additional experiment results.

https://github.com/zmrui/tcp_cubic_fix

Thanks,
Mingrui and Lisong

Changes:
  v3->v4:
    replace min() with min_t()
    separate declarations and code of tcp_cwnd_next_rtt
    https://lore.kernel.org/netdev/20240815214035.1145228-1-mrzhang97@gmail.com/
  v2->v3: 
    Correct the "Fixes:" footer content
    https://lore.kernel.org/netdev/20240815001718.2845791-1-mrzhang97@gmail.com/
  v1->v2: 
    Separate patches
    Add new cwnd_prior field to hold cwnd before a loss event
    https://lore.kernel.org/netdev/20240810223130.379146-1-mrzhang97@gmail.com/


Signed-off-by: Mingrui Zhang <mrzhang97@...il.com>
Signed-off-by: Lisong Xu <xu@....edu>

Mingrui Zhang (3):
  tcp_cubic: fix to run bictcp_update() at least once per RTT
  tcp_cubic: fix to match Reno additive increment
  tcp_cubic: fix to use emulated Reno cwnd one RTT in the future

 net/ipv4/tcp_cubic.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

-- 
2.34.1

