Message-ID: <9e1011a8-70bd-468d-96b2-a306039b97f9@redhat.com>
Date: Wed, 19 Nov 2025 09:59:45 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski
<kuba@...nel.org>, Simon Horman <horms@...nel.org>,
Neal Cardwell <ncardwell@...gle.com>, Kuniyuki Iwashima <kuniyu@...gle.com>,
netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 2/2] tcp: add net.ipv4.tcp_rtt_threshold sysctl

On 11/18/25 10:22 PM, Eric Dumazet wrote:
> I would perhaps use 8 senders, and force all receivers on one cpu (cpu
> 4 in the following run)
>
> for i in {1..8}
> do
> netperf -H host -T,4 -l 100 &
> done
>
> This would, I think, show what can happen when receivers cannot keep up.

Thanks for the suggestion. I should have realized that the receiver
needs to be under stress in the relevant scenario.
With the above setup, on a vanilla kernel, the rcvbuf values I see are:

min 2134391 max 33554432 avg 12085941

with multiple connections hitting tcp_rmem[2].

With the patched kernel:

min 1192472 max 33554432 avg 4247351

There is a single outlier hitting tcp_rmem[2]; in that case the
connection observes, for some samples, an RTT just above the
tcp_rtt_threshold sysctl/tcp_rcvbuf_low_rtt.
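
For reference, stats like the above can be collected with something
along these lines (a rough sketch; it samples the 'rb' field from ss's
skmem output, which reports sk_rcvbuf, and the 100-sample duration and
'state established' filter are illustrative):

# sample sk_rcvbuf of established TCP sockets once per second
for i in $(seq 1 100); do
	ss -tnm state established
	sleep 1
done | awk '
	# skmem:(r...,rb<bytes>,...) -> extract the rb value
	match($0, /rb[0-9]+/) {
		v = substr($0, RSTART + 2, RLENGTH - 2) + 0
		if (n == 0 || v < min) min = v
		if (v > max) max = v
		sum += v; n++
	}
	END { if (n) printf "min %d max %d avg %d\n", min, max, sum / n }
'
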
FWIW I guess you can add:
Tested-by: Paolo Abeni <pabeni@...hat.com>
Thanks,
Paolo