Message-ID: <CANn89i+9cOC4Ftnh2q7SZ+iP7-qe2jkFW3NtvFGzXLxoGUOsiA@mail.gmail.com>
Date: Thu, 6 Jan 2022 00:25:15 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Ivan Babrou <ivan@...udflare.com>
Cc: Stephen Hemminger <stephen@...workplumber.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...udflare.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH net] tcp: note that tcp_rmem[1] has a limited range
On Wed, Jan 5, 2022 at 8:20 PM Ivan Babrou <ivan@...udflare.com> wrote:
>
> On Tue, Jan 4, 2022 at 12:33 AM Eric Dumazet <edumazet@...gle.com> wrote:
> > I guess you have to define what the initial window is.
>
> What I mean here is the first window after scaling is allowed, so the
> one that appears in the first non-SYN ACK.
>
> > There seems to be a confusion between rcv_ssthresh and sk_rcvbuf
> >
> > If you want to document what is rcv_ssthresh and how it relates to sk_rcvbuf,
> > you probably need more than few lines in Documentation/networking/ip-sysctl.rst
>
> I can't say I fully understand how buffer sizes grow and how
> rcv_ssthresh and sk_rcvbuf interact well enough to document this properly.
>
> All I want is to document the fact that no matter what you punch into
> sysctls, you'll end up with an initial scaled window (defined above)
> that's no higher than 64k. Let me know if this is incorrect and if
> there's a way we can put this into words without going into too much
> detail.
Just to clarify, a normal TCP 3WHS has a final ACK packet, in which
window scaling is already enabled.
You are describing a possible issue with passive connections.
Most of the time, servers want some kind of control before allowing a
remote peer to send megabytes of payload in the first round trip.
However, a typical connection starts with IW10 (RFC 6928), and
standard TCP congestion control implements slow start, doubling the
payload at every round trip, so this is not an issue.
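For example, assuming a typical MSS of about 1460 bytes, IW10 is
roughly 14.6 KB of payload in the first round trip; slow start then
allows ~29 KB, ~58 KB and ~117 KB on the following ones, so a sender
outgrows a 64 KB window within about three round trips anyway.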
If you want to enable an RWIN bigger than 65535 for passive
connections, this would violate standards and should first be
discussed at the IETF.
If you want to enable an RWIN bigger than 65535 for passive
connections in a controlled environment, I suggest using an eBPF
program to do so.
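One possible approach, as a rough, untested sketch: a
BPF_PROG_TYPE_SOCK_OPS program attached to the cgroup of your
workload that enlarges the receive buffer of passively established
sockets (the 4 MB value is only an example):

/* SPDX-License-Identifier: GPL-2.0 */
/* Rough sketch: enlarge the receive buffer of passively established
 * (server side) TCP sockets from a sock_ops program.
 * Build with clang -target bpf, attach with BPF_CGROUP_SOCK_OPS.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef SOL_SOCKET
#define SOL_SOCKET	1
#endif
#ifndef SO_RCVBUF
#define SO_RCVBUF	8
#endif

SEC("sockops")
int tune_passive_rcvbuf(struct bpf_sock_ops *skops)
{
	int rcvbuf = 4 * 1024 * 1024;	/* example value, pick your own */

	if (skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
		/* The kernel still clamps this to net.core.rmem_max. */
		bpf_setsockopt(skops, SOL_SOCKET, SO_RCVBUF,
			       &rcvbuf, sizeof(rcvbuf));
	return 1;
}

char _license[] SEC("license") = "GPL";

Note this only lifts the buffer limit; how quickly the advertised
window actually opens still depends on rcv_ssthresh, as discussed
above.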
>
> > Please do not. We set this sysctl to 0.5 MB
> > DRS is known to have quantization artifacts.
>
> Where can I read more about the quantization artifacts you mentioned?
DRS (Dynamic Right-Sizing) is implemented in
tcp_rcv_space_adjust()/tcp_rcv_rtt_update(); you can look at the git
history for plenty of details, e.g.:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c3916ad9320eed8eacd7c0b2cf7f881efceda892