Message-ID: <20240826092707.2661435-1-edumazet@google.com>
Date: Mon, 26 Aug 2024 09:27:07 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>
Cc: Neal Cardwell <ncardwell@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com, 
	Eric Dumazet <edumazet@...gle.com>, Mingrui Zhang <mrzhang97@...il.com>, Lisong Xu <xu@....edu>
Subject: [PATCH net] tcp_cubic: switch ca->last_time to usec resolution

bictcp_update() uses ca->last_time as a timestamp
for several heuristics.

Historically this timestamp has been fed with jiffies,
whose resolution is too coarse; some distros still
use CONFIG_HZ_250=y (4 ms per jiffy).

It is time to switch to usec resolution, now that the
TCP stack already caches the high-resolution time in
tp->tcp_mstamp.

Also remove the 'inline' qualifier: this helper is used
once, and compilers are smart enough to inline it on
their own.

Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Link: https://lore.kernel.org/netdev/20240817163400.2616134-1-mrzhang97@gmail.com/T/#mb6a64c9e2309eb98eaeeeb4b085c4a2270b6789d
Cc: Mingrui Zhang <mrzhang97@...il.com>
Cc: Lisong Xu <xu@....edu>
---
 net/ipv4/tcp_cubic.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 5dbed91c6178257df8d2ccd1c8690a10bdbaf56a..3b1845103ee1866a316926a130c212e6f5e78ef0 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -87,7 +87,7 @@ struct bictcp {
 	u32	cnt;		/* increase cwnd by 1 after ACKs */
 	u32	last_max_cwnd;	/* last maximum snd_cwnd */
 	u32	last_cwnd;	/* the last snd_cwnd */
-	u32	last_time;	/* time when updated last_cwnd */
+	u32	last_time;	/* time when updated last_cwnd (usec) */
 	u32	bic_origin_point;/* origin point of bic function */
 	u32	bic_K;		/* time to origin point
 				   from the beginning of the current epoch */
@@ -211,26 +211,28 @@ static u32 cubic_root(u64 a)
 /*
  * Compute congestion window to use.
  */
-static inline void bictcp_update(struct bictcp *ca, u32 cwnd, u32 acked)
+static void bictcp_update(struct sock *sk, u32 cwnd, u32 acked)
 {
+	const struct tcp_sock *tp = tcp_sk(sk);
+	struct bictcp *ca = inet_csk_ca(sk);
 	u32 delta, bic_target, max_cnt;
 	u64 offs, t;
 
 	ca->ack_cnt += acked;	/* count the number of ACKed packets */
 
-	if (ca->last_cwnd == cwnd &&
-	    (s32)(tcp_jiffies32 - ca->last_time) <= HZ / 32)
+	delta = tp->tcp_mstamp - ca->last_time;
+	if (ca->last_cwnd == cwnd && delta <= USEC_PER_SEC / 32)
 		return;
 
-	/* The CUBIC function can update ca->cnt at most once per jiffy.
+	/* The CUBIC function can update ca->cnt at most once per ms.
 	 * On all cwnd reduction events, ca->epoch_start is set to 0,
 	 * which will force a recalculation of ca->cnt.
 	 */
-	if (ca->epoch_start && tcp_jiffies32 == ca->last_time)
+	if (ca->epoch_start && delta < USEC_PER_MSEC)
 		goto tcp_friendliness;
 
 	ca->last_cwnd = cwnd;
-	ca->last_time = tcp_jiffies32;
+	ca->last_time = tp->tcp_mstamp;
 
 	if (ca->epoch_start == 0) {
 		ca->epoch_start = tcp_jiffies32;	/* record beginning */
@@ -334,7 +336,7 @@ __bpf_kfunc static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
 		if (!acked)
 			return;
 	}
-	bictcp_update(ca, tcp_snd_cwnd(tp), acked);
+	bictcp_update(sk, tcp_snd_cwnd(tp), acked);
 	tcp_cong_avoid_ai(tp, ca->cnt, acked);
 }
 
-- 
2.46.0.295.g3b9ea8a38a-goog
