Message-ID: <CADVnQymWJ1Ay=qWVNHeJ=kLVKnNZkTs-U38ZLGS-6JnF+xM4pg@mail.gmail.com>
Date: Wed, 28 Aug 2024 16:09:56 -0400
From: Neal Cardwell <ncardwell@...gle.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org, eric.dumazet@...il.com,
Mingrui Zhang <mrzhang97@...il.com>, Lisong Xu <xu@....edu>, Yuchung Cheng <ycheng@...gle.com>
Subject: Re: [PATCH net] tcp_cubic: switch ca->last_time to usec resolution
On Mon, Aug 26, 2024 at 1:27 PM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Mon, Aug 26, 2024 at 3:26 PM Neal Cardwell <ncardwell@...gle.com> wrote:
> >
> > On Mon, Aug 26, 2024 at 5:27 AM Eric Dumazet <edumazet@...gle.com> wrote:
> > >
> > > bictcp_update() uses ca->last_time as a timestamp
> > > to decide on several heuristics.
> > >
> > > Historically this timestamp has been fed with jiffies,
> > > which has too coarse a resolution; some distros are
> > > still using CONFIG_HZ_250=y.
> > >
> > > It is time to switch to usec resolution, now that the
> > > TCP stack already caches the high-resolution time in
> > > tp->tcp_mstamp.
> > >
> > > Also remove the 'inline' qualifier; this helper is used
> > > once and compilers are smart.
> > >
> > > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > > Link: https://lore.kernel.org/netdev/20240817163400.2616134-1-mrzhang97@gmail.com/T/#mb6a64c9e2309eb98eaeeeb4b085c4a2270b6789d
> > > Cc: Mingrui Zhang <mrzhang97@...il.com>
> > > Cc: Lisong Xu <xu@....edu>
> > > ---
> > > net/ipv4/tcp_cubic.c | 18 ++++++++++--------
> > > 1 file changed, 10 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
> > > index 5dbed91c6178257df8d2ccd1c8690a10bdbaf56a..3b1845103ee1866a316926a130c212e6f5e78ef0 100644
> > > --- a/net/ipv4/tcp_cubic.c
> > > +++ b/net/ipv4/tcp_cubic.c
> > > @@ -87,7 +87,7 @@ struct bictcp {
> > >  	u32	cnt;		/* increase cwnd by 1 after ACKs */
> > >  	u32	last_max_cwnd;	/* last maximum snd_cwnd */
> > >  	u32	last_cwnd;	/* the last snd_cwnd */
> > > -	u32	last_time;	/* time when updated last_cwnd */
> > > +	u32	last_time;	/* time when updated last_cwnd (usec) */
> > >  	u32	bic_origin_point;/* origin point of bic function */
> > >  	u32	bic_K;		/* time to origin point
> > > 				   from the beginning of the current epoch */
> > > @@ -211,26 +211,28 @@ static u32 cubic_root(u64 a)
> > >  /*
> > >   * Compute congestion window to use.
> > >   */
> > > -static inline void bictcp_update(struct bictcp *ca, u32 cwnd, u32 acked)
> > > +static void bictcp_update(struct sock *sk, u32 cwnd, u32 acked)
> > >  {
> > > +	const struct tcp_sock *tp = tcp_sk(sk);
> > > +	struct bictcp *ca = inet_csk_ca(sk);
> > >  	u32 delta, bic_target, max_cnt;
> > >  	u64 offs, t;
> > >
> > >  	ca->ack_cnt += acked;	/* count the number of ACKed packets */
> > >
> > > -	if (ca->last_cwnd == cwnd &&
> > > -	    (s32)(tcp_jiffies32 - ca->last_time) <= HZ / 32)
> > > +	delta = tp->tcp_mstamp - ca->last_time;
> > > +	if (ca->last_cwnd == cwnd && delta <= USEC_PER_SEC / 32)
> > >  		return;
> > >
> > > -	/* The CUBIC function can update ca->cnt at most once per jiffy.
> > > +	/* The CUBIC function can update ca->cnt at most once per ms.
> > >  	 * On all cwnd reduction events, ca->epoch_start is set to 0,
> > >  	 * which will force a recalculation of ca->cnt.
> > >  	 */
> > > -	if (ca->epoch_start && tcp_jiffies32 == ca->last_time)
> > > +	if (ca->epoch_start && delta < USEC_PER_MSEC)
> > >  		goto tcp_friendliness;
> >
> > AFAICT there is a problem here. It is switching this line of code to
> > use microsecond resolution without also changing the core CUBIC slope
> > (ca->cnt) calculation to use microseconds. AFAICT that means we
> > would be re-introducing the bug that was fixed in 2015 in
> > d6b1a8a92a1417f8859a6937d2e6ffe2dfab4e6d (see below). Basically, if
> > the CUBIC slope (ca->cnt) calculation uses jiffies, then we should
> > only run that code once per jiffy, to avoid getting the wrong answer
> > for the slope:
>
> Interesting... would adding the following part deal with this
> problem, or is it something else?
>
> diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
> index 3b1845103ee1866a316926a130c212e6f5e78ef0..bff5688ba5109fa5a0bbff7dc529525b2752dc46 100644
> --- a/net/ipv4/tcp_cubic.c
> +++ b/net/ipv4/tcp_cubic.c
> @@ -268,9 +268,10 @@ static void bictcp_update(struct sock *sk, u32 cwnd, u32 acked)
>
>  	t = (s32)(tcp_jiffies32 - ca->epoch_start);
>  	t += usecs_to_jiffies(ca->delay_min);
> -	/* change the unit from HZ to bictcp_HZ */
> +	t = jiffies_to_msecs(t);
> +	/* change the unit from ms to bictcp_HZ */
>  	t <<= BICTCP_HZ;
> -	do_div(t, HZ);
> +	do_div(t, MSEC_PER_SEC);
>
>  	if (t < ca->bic_K)		/* t - K */
>  		offs = ca->bic_K - t;
I don't think that would be sufficient to take care of the issue.

The issue (addressed in d6b1a8a92a1417f8859a6937d2e6ffe2dfab4e6d) is
that in the CUBIC bictcp_update() computation of bic_target, the input
is tcp_jiffies32. That means the output bic_target only changes when
tcp_jiffies32 increments to a new jiffies value.

So if we were to go back to executing the bic_target and ca->cnt
computations more than once per jiffy, the ca->cnt "slope" value would
become increasingly incorrect over the course of each jiffy, because
the ca->cnt computation looks like:

  ca->cnt = cwnd / (bic_target - cwnd);

...and cwnd can update on every ACK event, while bic_target stays
"stuck" for the rest of the jiffy due to the jiffy granularity.
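
To make the failure mode concrete, here is a minimal userspace sketch
(not kernel code; the starting cwnd and the frozen bic_target values
are made up purely for illustration) of what happens to ca->cnt while
bic_target is pinned for a jiffy but cwnd keeps growing:

#include <stdio.h>

int main(void)
{
	unsigned int bic_target = 110;	/* frozen until the next jiffy tick */
	unsigned int cwnd, cnt;

	for (cwnd = 100; cwnd < bic_target; cwnd++) {
		/* same formula as bictcp_update() */
		cnt = cwnd / (bic_target - cwnd);
		printf("cwnd=%u -> cnt=%u\n", cwnd, cnt);
	}
	return 0;
}

Since ca->cnt is the number of ACKs required per one-packet cwnd
increase, cnt ballooning from 10 to 109 over the course of the jiffy
means the slope ends up roughly 10x too conservative by the end of it.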
I guess one approach to avoiding this issue would be to change the
initial computation of the "t" variable to be in microseconds and
increase BICTCP_HZ from 10 to 20, so that the final value of t also
increments roughly once per microsecond. But then I suspect a lot of
code would have to be tweaked to avoid overflows... e.g., AFAICT with
microsecond units the core logic to cube the offs value would
overflow quite often:

  delta = (cube_rtt_scale * offs * offs * offs) >> (10+3*BICTCP_HZ)
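
A quick userspace sanity check of that overflow concern (a sketch
assuming the default bic_scale of 41, i.e. cube_rtt_scale = 410, and
an offs = |t - K| span of roughly one second):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t cube_rtt_scale = 410;	/* default bic_scale=41 -> 41 * 10 */
	const uint64_t offs_hz10 = 1ULL << 10;	/* ~1 second with BICTCP_HZ = 10 */
	const uint64_t offs_hz20 = 1ULL << 20;	/* ~1 second with BICTCP_HZ = 20 */
	uint64_t d;

	/* BICTCP_HZ = 10: 410 * 2^30 is about 2^38.7, well within u64 */
	d = cube_rtt_scale * offs_hz10 * offs_hz10 * offs_hz10;
	printf("hz10: %llu\n", (unsigned long long)d);

	/* BICTCP_HZ = 20: 410 * 2^60 is about 2^68.7, past the u64 limit */
	if (__builtin_mul_overflow(cube_rtt_scale * offs_hz20 * offs_hz20,
				   offs_hz20, &d))
		printf("hz20: u64 overflow\n");
	return 0;
}

With BICTCP_HZ = 20, the cube already overflows u64 once offs exceeds
roughly 350 ms worth of units, which real CUBIC epochs can easily hit.
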
IMHO it's safest to just leave last_time in jiffies. :-)
neal