Message-ID: <CAA93jw5+LjKLcCaNr5wJGPrXhbjvLhts8hqpKPFx7JeWG4g0AA@mail.gmail.com>
Date: Thu, 13 Jan 2022 21:43:57 -0800
From: Dave Taht <dave.taht@...il.com>
To: Ivan Babrou <ivan@...udflare.com>
Cc: bpf <bpf@...r.kernel.org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...udflare.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH bpf-next] tcp: bpf: Add TCP_BPF_RCV_SSTHRESH for bpf_setsockopt
On Thu, Jan 13, 2022 at 2:56 PM Ivan Babrou <ivan@...udflare.com> wrote:
>
> On Wed, Jan 12, 2022 at 1:02 PM Dave Taht <dave.taht@...il.com> wrote:
> > I would not use the word "latency" in this way, I would just say
> > potentially reducing roundtrips...
>
> Roundtrips translate directly into latency on high latency links.
Yes, but with the caveats below. I'm fine with you just saying round trips,
and with making this API possible.
It would comfort me further if you could provide an actual scenario.
See also:
https://datatracker.ietf.org/doc/html/rfc6928
which predates packet pacing (are you using sch_fq?)
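(Something like 'tc -s qdisc show dev eth0' will show fq there if so;
the device name is only an example.)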
>
> > and potentially massively increasing packet loss, oversaturating links,
> > and otherwise hurting latency for other applications sharing the link,
> > including the application that advertised an extreme window like this.
>
> The receive window is going to scale up to tcp_rmem[2] with traffic,
> and packet loss won't stop it. That's around 3MiB on anything that's
> not embedded these days.
>
> My understanding is that congestion control on the sender side deals
> with packet loss, bottleneck saturation, and packet pacing. This patch
> only touches the receiving side, letting the client scale up faster if
> they choose to do so. I don't think any out of the box sender will
> make use of this, even if we enable it on the receiver, just because
> the sender's congestion control constraints are lower (like
> initcwnd=10).
I've always kind of not liked the sender/receiver "language" in TCP.
They are peers.
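To make sure we are talking about the same scenario: I picture the
receiving peer running a small sockops program roughly like the sketch
below. TCP_BPF_RCV_SSTHRESH is the optname your patch adds to the uapi
header; the hook point and the 1 MiB value are purely illustrative.

  #include <linux/bpf.h>     /* patched uapi providing TCP_BPF_RCV_SSTHRESH */
  #include <bpf/bpf_helpers.h>

  #ifndef SOL_TCP
  #define SOL_TCP 6          /* IPPROTO_TCP */
  #endif

  SEC("sockops")
  int raise_rcv_ssthresh(struct bpf_sock_ops *skops)
  {
          int val = 1 << 20; /* 1 MiB, just an example */

          /* Let the advertised receive window grow faster by raising
           * rcv_ssthresh once the passive connection is established. */
          if (skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
                  bpf_setsockopt(skops, SOL_TCP, TCP_BPF_RCV_SSTHRESH,
                                 &val, sizeof(val));
          return 1;
  }

  char _license[] SEC("license") = "GPL";

Is that roughly the shape of it?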
> Let me know if any of this doesn't look right to you.
>
> > This overall focus tends to freak me out somewhat, especially when
> > faced with further statements that cloudflare is using an initcwnd of 250!???
>
> Congestion window is a learned property, not a static number. You
> won't get a large initcwnd towards a poor connection.
initcwnd is set globally or on a per-route basis.
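With iproute2 that is, for example, something along the lines of
'ip route change default via <gateway> dev eth0 initcwnd 20', where the
gateway, device, and number are only placeholders.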
> We have a dedicated backbone with different properties.
It's not so much that I don't think your backbone can handle this...
... it's the prospect of handing whiskey, car keys, and excessive
initcwnd to teenage boys on a Saturday night.
--
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
Dave Täht CEO, TekLibre, LLC