Message-ID: <e0e3738e-c01f-4c2e-a782-3e7c99d8b647@uni-osnabrueck.de>
Date: Thu, 5 Feb 2026 09:29:49 +0100
From: Kathrin Elmenhorst <kelmenhorst@...-osnabrueck.de>
To: Neal Cardwell <ncardwell@...gle.com>
Cc: Eric Dumazet <edumazet@...gle.com>, Kuniyuki Iwashima
 <kuniyu@...gle.com>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net: tcp_bbr: use high pacing gain when the
 sender fails to put enough data inflight

On 2/3/26 15:29, Neal Cardwell wrote:

> The "or is reset to" part is an incorrect reading of the code; BBR
> does not reset itself to STARTUP when the connection is app-limited.
> :-)
> Please note that in bbr_cwnd_event() when an app-limited connection in
> BBR_PROBE_BW restarts from idle, BBR sets the pacing rate to 1.0x the
> estimated bandwidth. This is the common case for long-lived BBR
> connections with application-limited behavior. Your proposed patch
> makes the behavior much more aggressive in this case.

Got it, so the high pacing gain for app-limited sockets only applies
at the start, before BBR estimates that the full bandwidth has been
reached for the first time.
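
For reference, if I am reading net/ipv4/tcp_bbr.c correctly, the branch
you point to is the restart-from-idle handling in bbr_cwnd_event();
condensed here (not a verbatim quote), it looks roughly like this:

static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct bbr *bbr = inet_csk_ca(sk);

        if (event == CA_EVENT_TX_START && tp->app_limited) {
                bbr->idle_restart = 1;
                /* Restarting from idle while app-limited: pace at
                 * 1.0x the estimated bandwidth (gain BBR_UNIT)
                 * rather than the current PROBE_BW cycle gain.
                 */
                if (bbr->mode == BBR_PROBE_BW)
                        bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT);
        }
}

So a long-lived, app-limited connection in PROBE_BW is paced at the
plain bandwidth estimate on every restart from idle, which is the
common case you describe.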

> Related questions would be: for VMs using paced CUBIC (CUBIC with fq
> qdisc): (a) how much does paced CUBIC suffer on low-CPU-budget VMs?
> (b) how much does this alternate len_ns computation help paced CUBIC
> on such VMs?
Regarding (a), we did not test this as thoroughly as BBR, but we saw
that paced CUBIC can have similar issues under CPU contention; the
effect was smaller than with BBR, though, especially for shorter
off-CPU times. In other words, CUBIC+fq only starts slowing down under
more severe CPU contention, in particular once off-CPU times exceed
the RTT. I suspect this is because the performance degradation depends
not only on the rate limit imposed by pacing and the available CPU
time per RTT, but also on how the CCA reacts to lower-than-expected
delivery rates and added delay, and CUBIC is less sensitive to the
latter. In any case, I think it is a good idea to solve this problem
for paced TCP generally, not just for BBR.
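
To make the second point a bit more concrete, here is a crude
user-space toy model (not kernel code; all numbers are made up for
illustration): a strictly paced sender loses the transmit
opportunities that fall into its off-CPU time, and a CCA that
re-derives its rate from the measured deliveries (BBR-like) then feeds
that shortfall back into the next round, while a loss/delay-driven CCA
(CUBIC-like) only pays the direct pacing cost.

#include <stdio.h>

int main(void)
{
        double pacing_rate = 1e9;   /* bit/s at steady state */
        double period      = 20e-3; /* s, vCPU scheduling period */
        double t_off       = 8e-3;  /* s, off-CPU time per period */
        double on_frac     = (period - t_off) / period;

        /* CUBIC-like: the pacing rate stays tied to cwnd/RTT, so
         * only the transmit slots lost while off-CPU are gone.
         */
        double cubic_rate = pacing_rate * on_frac;

        /* BBR-like: the next round paces at (roughly) the measured
         * delivery rate, which already reflects the gap, so the
         * shortfall compounds. Real BBR's windowed max filter and
         * probing gains damp this, so treat it as a worst case.
         */
        double bbr_rate = pacing_rate;
        for (int round = 0; round < 5; round++)
                bbr_rate *= on_frac;

        printf("on-CPU fraction: %.2f\n", on_frac);
        printf("CUBIC-like: %.0f Mbit/s\n", cubic_rate / 1e6);
        printf("BBR-like after 5 rounds: %.0f Mbit/s\n",
               bbr_rate / 1e6);
        return 0;
}

Obviously this ignores cwnd limits, the bursts fq allows on resume,
and BBR's bandwidth probing, but it captures the direction of the
effect we observed.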

> I suspect we want something that builds on the following patches by Eric:
Thanks! I will check out the patches and test them with CUBIC and fq
on our setup.


Thank you for the feedback!
Kathrin

