Message-ID: <CANn89iL9g1Hxd74uvencxthK8aWNLtFKAHjtSm4o5aWsb7y8fQ@mail.gmail.com>
Date: Wed, 19 Nov 2025 01:07:24 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Simon Horman <horms@...nel.org>, Neal Cardwell <ncardwell@...gle.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 2/2] tcp: add net.ipv4.tcp_rtt_threshold sysctl
On Wed, Nov 19, 2025 at 12:59 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On 11/18/25 10:22 PM, Eric Dumazet wrote:
> > I would perhaps use 8 senders, and force all receivers on one cpu (cpu
> > 4 in the following run)
> >
> > for i in {1..8}
> > do
> > netperf -H host -T,4 -l 100 &
> > done
> >
> > This would, I think, show what can happen when receivers cannot keep up.
>
> Thanks for the suggestion. I should have realized the receiver needs to
> be under stress in the relevant scenario.
>
> With the above setup, on vanilla kernel, the rcvbuf I see is:
>
> min 2134391 max 33554432 avg 12085941
>
> with multiple connections hitting tcp_rmem[2]
>
> with the patched kernel:
>
> min 1192472 max 33554432 avg 4247351
>
> there is a single outlier hitting tcp_rmem[2], and in that case the
> connection observes, for some samples, an RTT just above the
> tcp_rtt_threshold sysctl/tcp_rcvbuf_low_rtt.
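(Side note: the per-connection value those min/max/avg numbers summarize
is sk_rcvbuf, which can be read back on any socket the test program owns.
A minimal, hypothetical probe, not something netperf itself exposes:

/* Hypothetical helper: read the auto-tuned receive buffer (sk_rcvbuf)
 * on a connected TCP socket.  Values are directly comparable to the
 * min/max/avg numbers above and to the tcp_rmem[2] cap.
 */
#include <stdio.h>
#include <sys/socket.h>

static int report_rcvbuf(int fd)
{
	int rcvbuf = 0;
	socklen_t len = sizeof(rcvbuf);

	if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) < 0)
		return -1;

	printf("rcvbuf=%d bytes\n", rcvbuf);
	return 0;
}
)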
For very long flows, scheduling glitches on the receiver tend to inflate
the @copied part and can lead to a wrong tcp_rcvbuf_grow() response.
I think DRS is reasonably effective, but, as with many heuristics, it can
be slightly wrong in some cases.
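To make that failure mode concrete, here is a deliberately simplified
sketch of a DRS-style growth decision; the names, structure and the 2x
headroom factor are illustrative assumptions, not the actual
tcp_rcvbuf_grow() code:

/*
 * Once per RTT the receiver measures how many bytes were copied to
 * user space.  If that exceeds the previous estimate, the buffer is
 * grown with some headroom, capped by tcp_rmem[2].  A descheduled
 * reader that catches up in one burst inflates "copied" for that
 * sample, so the estimate and the buffer grow more than the steady
 * drain rate would justify.
 */
struct drs_state {
	unsigned int space;		/* bytes expected per RTT */
	unsigned int rcvbuf;		/* current receive buffer size */
	unsigned int rcvbuf_max;	/* tcp_rmem[2] cap */
};

static void drs_rtt_sample(struct drs_state *st, unsigned int copied)
{
	if (copied <= st->space)
		return;			/* reader kept up, no growth */

	st->space = copied;
	st->rcvbuf = 2 * copied;	/* grow ahead of measured demand */
	if (st->rcvbuf > st->rcvbuf_max)
		st->rcvbuf = st->rcvbuf_max;
}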
>
> FWIW I guess you can add:
>
> Tested-by: Paolo Abeni <pabeni@...hat.com>
>
> Thanks,
>
> Paolo
>