Message-ID: <CABWYdi0ZHYvzzP9SFOCJhnfyMP12Ot9ALEmXg75oeXBWRAD8KQ@mail.gmail.com>
Date: Thu, 13 Jan 2022 14:56:42 -0800
From: Ivan Babrou <ivan@...udflare.com>
To: Dave Taht <dave.taht@...il.com>
Cc: bpf <bpf@...r.kernel.org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...udflare.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH bpf-next] tcp: bpf: Add TCP_BPF_RCV_SSTHRESH for bpf_setsockopt

On Wed, Jan 12, 2022 at 1:02 PM Dave Taht <dave.taht@...il.com> wrote:
> I would not use the word "latency" in this way, I would just say
> potentially reducing
> roundtrips...

Round trips translate directly into latency on high-latency links: on a
100 ms RTT path, for example, every round trip spent growing the window
adds another 100 ms before the transfer completes.
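
To put a rough number on it (my arithmetic, not something measured in
this thread): assume the receive window roughly doubles per round trip
until it reaches tcp_rmem[2]. All values below are illustrative
assumptions.

/* Back-of-the-envelope: round trips needed for the receive window
 * to grow from a small initial value to ~tcp_rmem[2], and what that
 * costs on a high-latency path. Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double start_window = 64.0 * 1024;        /* assumed initial window */
	double target_window = 3.0 * 1024 * 1024; /* ~tcp_rmem[2] */
	double rtt_ms = 100.0;                    /* assumed long-haul RTT */

	/* Window roughly doubles per round trip while it is the bottleneck. */
	double rounds = ceil(log2(target_window / start_window));

	printf("%.0f round trips, ~%.0f ms, before the window is fully open\n",
	       rounds, rounds * rtt_ms);
	return 0;
}

With these numbers that's 6 round trips, i.e. ~600 ms of receiver-side
ramp-up that advertising a large window up front would skip.
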
> and potentially massively increasing packet loss, oversaturating
> links, and otherwise
> hurting latency for other applications sharing the link, including the
> application
> that advertised an extreme window like this.

The receive window scales up to tcp_rmem[2] as traffic flows, and
packet loss won't stop it. That's around 3 MiB on anything that's not
embedded these days.
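
If you want to check the ceiling on a given box, it's the third field
of the standard procfs knob; a trivial userspace sketch:

#include <stdio.h>

int main(void)
{
	long min, def, max;
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_rmem", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%ld %ld %ld", &min, &def, &max) != 3) {
		fclose(f);
		return 1;
	}
	fclose(f);

	/* tcp_rmem[2] is the ceiling autotuning can grow the receive
	 * buffer to, unless the application pins SO_RCVBUF itself. */
	printf("tcp_rmem max: %ld bytes (%.1f MiB)\n",
	       max, max / (1024.0 * 1024.0));
	return 0;
}
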
My understanding is that congestion control on the sender side deals
with packet loss, bottleneck saturation, and packet pacing. This patch
only touches the receiving side, letting the client scale up faster if
it chooses to do so. I don't think any out-of-the-box sender will make
use of this, even if we enable it on the receiver, simply because the
sender's congestion control constraints are tighter (e.g. initcwnd=10).
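
To make the opt-in shape concrete, here's a minimal sockops sketch of
what a receiver-side deployment could look like. This is my
illustration, not code from the patch: the optname value below is a
placeholder (use the constant the patch actually defines), and 1 MiB is
an arbitrary example threshold.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef SOL_TCP
#define SOL_TCP 6
#endif

/* Placeholder value; take the real constant from the patch. */
#ifndef TCP_BPF_RCV_SSTHRESH
#define TCP_BPF_RCV_SSTHRESH 1005
#endif

SEC("sockops")
int raise_rcv_ssthresh(struct bpf_sock_ops *skops)
{
	int val = 1 << 20; /* example: let the window open up to 1 MiB */

	/* Only touch sockets we accepted; the sending side is untouched. */
	if (skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
		bpf_setsockopt(skops, SOL_TCP, TCP_BPF_RCV_SSTHRESH,
			       &val, sizeof(val));

	return 1;
}

char _license[] SEC("license") = "GPL";

Nothing changes for anyone who doesn't load a program like this and
attach it to a cgroup.
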
Let me know if any of this doesn't look right to you.
> This overall focus tends to freak me out somewhat, especially when
> faced with further statements that cloudflare is using an initcwnd of 250!???

The congestion window is a learned property, not a static number. You
won't get a large initcwnd towards a poor connection.

We have a dedicated backbone with different properties.
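
For reference, a per-socket initcwnd knob already exists upstream as
TCP_BPF_IW (see samples/bpf/tcp_iw_kern.c). A minimal sketch, using the
250 from this thread as the example value; this is an illustration, not
our actual configuration:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef SOL_TCP
#define SOL_TCP 6
#endif

SEC("sockops")
int set_initial_cwnd(struct bpf_sock_ops *skops)
{
	int iw = 250; /* the number being discussed in this thread */

	/* TCP_BPF_IW has to be applied before any data is sent, so
	 * hook the active-established callback on outgoing connections. */
	if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB)
		bpf_setsockopt(skops, SOL_TCP, TCP_BPF_IW,
			       &iw, sizeof(iw));

	return 1;
}

char _license[] SEC("license") = "GPL";
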