lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <CADVnQyn2pC5Vjym490ZjjUqak0wRiV5OBhtFU8hqrM6AQQht+g@mail.gmail.com>
Date: Wed, 28 Aug 2024 16:32:37 -0400
From: Neal Cardwell <ncardwell@...gle.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Mingrui Zhang <mrzhang97@...il.com>, davem@...emloft.net, netdev@...r.kernel.org, 
	Lisong Xu <xu@....edu>
Subject: Re: [PATCH net v4 1/3] tcp_cubic: fix to run bictcp_update() at least
 once per RTT

On Mon, Aug 26, 2024 at 5:26 AM Eric Dumazet <edumazet@...gle.com> wrote:
...
> I prefer you rebase your patch after mine is merged.
>
> There is a common misconception with jiffies.
> It can change in less than 20 nsec.
> Assuming that delta(jiffies) == 1 means that 1ms has elapsed is plain wrong.
> In the old days, linux TCP only could rely on jiffies and we had to
> accept its limits.
> We now can switch to high resolution clocks, without extra costs,
> because we already cache in tcp->tcp_mstamp
> the usec timestamp for the current time.
>
> Some distros are using CONFIG_HZ_250=y or CONFIG_HZ_100=y, this means
> current logic in cubic is more fuzzy for them.
>
> Without ca->last_time conversion to jiffies, your patch would still be
> limited to jiffies resolution:
> usecs_to_jiffies(ca->delay_min) would round up to much bigger values
> for DC communications.

Even given Eric's excellent point above (that jiffies can increase by
one even though only O(us) or less may have elapsed), AFAICT the
patch should be fine in practice.

The patch says:

+       /* Update 32 times per second if RTT > 1/32 second,
+        * or every RTT if RTT < 1/32 second even when last_cwnd == cwnd
+        */
        if (ca->last_cwnd == cwnd &&
-           (s32)(tcp_jiffies32 - ca->last_time) <= HZ / 32)
+           (s32)(tcp_jiffies32 - ca->last_time) <=
+           min_t(s32, HZ / 32, usecs_to_jiffies(ca->delay_min)))
                return;

So, basically, we only fall through and try to run the core of
bictcp_update() if cwnd has changed since ca->last_cwnd, or
tcp_jiffies32 has increased by more than
min_t(s32, HZ / 32, usecs_to_jiffies(ca->delay_min)) since ca->last_time.
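Read as a standalone condition, the gating logic can be sketched in
plain C (a hypothetical illustration, not the kernel source: min_t()
and the jiffies types are replaced by fixed-width integers, HZ is
assumed to be 1000, and delay_min_j stands for ca->delay_min already
converted to jiffies):

```c
#include <stdint.h>

#define HZ 1000  /* assume CONFIG_HZ_1000 for this sketch */

/* Returns nonzero if the core of bictcp_update() would be skipped:
 * cwnd is unchanged AND no more than min(HZ/32, delay_min in
 * jiffies) ticks have elapsed since last_time_j. */
static int core_update_skipped(uint32_t cwnd, uint32_t last_cwnd,
                               uint32_t now_j, uint32_t last_time_j,
                               int32_t delay_min_j)
{
    int32_t limit = delay_min_j < HZ / 32 ? delay_min_j : HZ / 32;

    return cwnd == last_cwnd &&
           (int32_t)(now_j - last_time_j) <= limit;
}
```

With a sub-jiffy RTT (delay_min_j == 1), the core only runs once the
tick delta exceeds 1, or once cwnd changes.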

AFAICT this works out OK because the logic is looking for "more than"
and usecs_to_jiffies() rounds up. That means that in the
interesting/tricky/common case where ca->delay_min is less than a
jiffy, usecs_to_jiffies(ca->delay_min) will return 1 jiffy. So in
this case we will only fall through and try to run the core of
bictcp_update() if cwnd has changed since ca->last_cwnd, or
tcp_jiffies32 has increased by more than 1 jiffy (i.e., by 2 or more
jiffies).
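To make the round-up concrete, here is a toy model of
usecs_to_jiffies() as ceil(usecs * HZ / USEC_PER_SEC) (an assumption
made for illustration; the in-kernel implementation differs in
detail but shares the round-up property):

```c
#include <stdint.h>

/* Round-up microseconds-to-jiffies conversion: any nonzero
 * duration shorter than one jiffy maps to 1 jiffy. */
static uint32_t usecs_to_jiffies_sketch(uint32_t usecs, uint32_t hz)
{
    return (uint32_t)(((uint64_t)usecs * hz + 999999) / 1000000);
}
```

So a 100 us datacenter RTT becomes 1 jiffy whether HZ is 100, 250,
or 1000, which is why the "more than" comparison effectively demands
a tick delta of 2 or more for sub-jiffy RTTs.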

AFAICT the fact that this check triggers only if tcp_jiffies32
has increased by 2 or more means that at least one full jiffy has
elapsed between when we set ca->last_time and the time when this check
triggers running the core of bictcp_update().

So AFAICT this logic is not tricked by the fact that a single
increment of tcp_jiffies32 can happen over O(us) or less.
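The floor-division nature of a tick counter makes that easy to check
numerically. In this toy model (illustrative only; jiffies modeled as
floor(t_us / JIFFY_US) with HZ assumed to be 250), a delta of 1 can
occur after 1 us of real time, but a delta of 2 cannot occur in less
than one full jiffy:

```c
#include <stdint.h>

#define HZ 250
#define JIFFY_US (1000000 / HZ)  /* 4000 us per tick at HZ=250 */

/* Toy jiffies counter: number of whole tick boundaries passed
 * by absolute time t_us. */
static uint32_t jiffies_at(uint64_t t_us)
{
    return (uint32_t)(t_us / JIFFY_US);
}
```

Worst case, last_time is recorded 1 us before a tick boundary: the
very next boundary gives a delta of 1 almost immediately, but the
second boundary is a full jiffy later.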

At first glance it may sound like, if the RTT is much less than a
jiffy, many RTTs could elapse before we run the core of
bictcp_update(). However, AFAIK if the RTT is much less than a jiffy
then CUBIC is very likely in Reno mode, and so is very likely to
increase cwnd by roughly 1 packet per round trip (the behavior of
Reno), so the (ca->last_cwnd == cwnd) condition should fail roughly
once per round trip and allow recomputation of the ca->cnt slope.

So AFAICT this patch should be OK in practice.

Given those considerations, Eric, do you think it would be OK to
accept the patch as-is?

Thanks!

neal
