Message-ID: <CANn89iJfx3CBJYBS01Mz9z3twjsP3xvSSOamno-cYSSzv3gSxw@mail.gmail.com>
Date: Thu, 16 Jan 2025 16:30:12 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Neal Cardwell <ncardwell@...gle.com>
Cc: Jason Xing <kerneljasonxing@...il.com>, Mahdi Arghavani <ma.arghavani@...oo.com>, 
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>, 
	"haibo.zhang@...go.ac.nz" <haibo.zhang@...go.ac.nz>, 
	"david.eyers@...go.ac.nz" <david.eyers@...go.ac.nz>, "abbas.arghavani@....se" <abbas.arghavani@....se>
Subject: Re: [PATCH net v2] tcp_cubic: fix incorrect HyStart round start detection

On Thu, Jan 16, 2025 at 3:42 PM Neal Cardwell <ncardwell@...gle.com> wrote:
>
> On Thu, Jan 16, 2025 at 6:40 AM Jason Xing <kerneljasonxing@...il.com> wrote:
> >
> > On Thu, Jan 16, 2025 at 5:49 PM Mahdi Arghavani <ma.arghavani@...oo.com> wrote:
> > >
> > > Hi Jason,
> > >
> > > I will explain this using a test conducted on my local testbed. Imagine a client and a server connected through two Linux software routers. In this setup, the minimum RTT is 150 ms, the bottleneck bandwidth is 50 Mbps, and the bottleneck buffer size is 1 BDP: (50 Mbit/s / (1514 bytes * 8 bits/byte)) * 0.150 s ~= 619 packets.
> > >
> > > I conducted the test twice, transferring data from the server to the client for 1.5 seconds:
> > >
> > > TEST 1) With the patch applied: HyStart stopped the exponential growth of cwnd when cwnd = 632 and the bottleneck link was saturated (632 > 619).
> > >
> > > TEST 2) Without the patch applied: HyStart stopped the exponential growth of cwnd when cwnd = 516 and the bottleneck link was not yet saturated (516 < 619). This resulted in 300 KB less delivered data compared to the first test.
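
Side note: a quick standalone sanity check of that 619-packet BDP
figure, using the constants quoted above (illustrative userspace code,
not part of the patch):

#include <stdio.h>

int main(void)
{
	const double bw_bps   = 50e6;      /* bottleneck bandwidth: 50 Mbit/s */
	const double rtt_s    = 0.150;     /* minimum RTT: 150 ms */
	const double pkt_bits = 1514 * 8;  /* one MTU-sized frame, in bits */

	/* BDP in packets = bandwidth * RTT / packet size */
	printf("BDP ~= %.0f packets\n", bw_bps * rtt_s / pkt_bits);
	return 0;
}
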
> >
> > Thanks for sharing these numbers. I would suggest adding the above
> > description to the commit message in v3. No need to send a v3 until the
> > TCP maintainers (Eric and Neal) give further suggestions :)
> >
> > Feel free to add my reviewed-by tag in the next version:
> > Reviewed-by: Jason Xing <kerneljasonxing@...il.com>
> >
> > Thanks,
> > Jason
>
> Mahdi, a few quick questions about your test logs, beforePatch.log and
> afterPatch.log:
>
> + What is moRTT? Is that ca->curr_rtt? It would be great if you could
> share the debug patch you used, so we know for certain how to interpret
> each column in the debug output.

+1

Debug patches can themselves add delays...



>
> + Are both HYSTART-DELAY and HYSTART-ACK-TRAIN enabled for both of those tests?
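
(For reference: both heuristics are controlled by the tcp_cubic module
parameter hystart_detect; bit 0x1 enables the ACK-train detector and bit
0x2 the delay detector, with both enabled by default. It should be
readable from /sys/module/tcp_cubic/parameters/hystart_detect.)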

I also wonder if cubictcp_cwnd_event(event == CA_EVENT_TX_START)
should also call bictcp_hystart_reset().

CUBIC was not really expecting app_limited mode.
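
Something along these lines, perhaps; a completely untested sketch
against net/ipv4/tcp_cubic.c (the hystart/slow-start guard is my
assumption, not necessarily the right condition):

static void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
{
	if (event == CA_EVENT_TX_START) {
		struct bictcp *ca = inet_csk_ca(sk);
		u32 now = tcp_jiffies32;
		s32 delta;

		delta = now - tcp_sk(sk)->lsndtime;

		/* We were application limited (idle) for a while.
		 * Shift epoch_start to keep cwnd growth to cubic curve.
		 */
		if (ca->epoch_start && delta > 0) {
			ca->epoch_start += delta;
			if (after(ca->epoch_start, now))
				ca->epoch_start = now;
		}

		/* Restarting from idle: begin a fresh HyStart round so
		 * that stale round_start/curr_rtt samples do not distort
		 * the slow start exit decision.
		 */
		if (hystart && tcp_in_slow_start(tcp_sk(sk)))
			bictcp_hystart_reset(sk);
		return;
	}
}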
