Message-ID: <CANn89iL1BkSyE=iSigZxvVB4_59QjWBY_5GuSoH8rcAaZ84EUg@mail.gmail.com>
Date: Fri, 4 Dec 2020 20:10:04 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Hazem Mohamed Abuelfotoh <abuehaze@...zon.com>
Cc: netdev <netdev@...r.kernel.org>, stable@...r.kernel.org,
Yuchung Cheng <ycheng@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>,
Wei Wang <weiwan@...gle.com>,
"Strohman, Andy" <astroh@...zon.com>,
Benjamin Herrenschmidt <benh@...zon.com>
Subject: Re: [PATCH net-next] tcp: optimise receiver buffer autotuning
initialisation for high latency connections
On Fri, Dec 4, 2020 at 7:08 PM Hazem Mohamed Abuelfotoh
<abuehaze@...zon.com> wrote:
>
> Previously, receiver buffer auto-tuning started only after one
> advertised window's worth of data had been received. After the
> initial receive buffer was raised by
> commit a337531b942b ("tcp: up initial rmem to 128KB
> and SYN rwin to around 64KB"), TCP autotuning
> could take too long to start raising
> the receive buffer size.
> Commit 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
> tried to decrease the threshold at which TCP auto-tuning starts,
> but it doesn't work well in some environments
> where the receiver has a large MTU (9001) configured,
> especially in environments where the RTT is high.
> To address this issue, this patch relies on RCV_MSS
> so auto-tuning can start early regardless of
> the receiver's configured MTU.
>
> Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
> Fixes: 041a14d26715 ("tcp: start receiver buffer autotuning sooner")
>
> Signed-off-by: Hazem Mohamed Abuelfotoh <abuehaze@...zon.com>
> ---
> net/ipv4/tcp_input.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 389d1b340248..f0ffac9e937b 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -504,13 +504,14 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
> static void tcp_init_buffer_space(struct sock *sk)
> {
> int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win;
> + struct inet_connection_sock *icsk = inet_csk(sk);
> struct tcp_sock *tp = tcp_sk(sk);
> int maxwin;
>
> if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK))
> tcp_sndbuf_expand(sk);
>
> - tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * tp->advmss);
> + tp->rcvq_space.space = min_t(u32, tp->rcv_wnd, TCP_INIT_CWND * icsk->icsk_ack.rcv_mss);
So are you claiming icsk->icsk_ack.rcv_mss is related to MTU 9000?
RCV_MSS is not known until we receive actual packets... The initial
value is something like 536 if I am not mistaken.
I think your patch does not match the changelog.
> tcp_mstamp_refresh(tp);
> tp->rcvq_space.time = tp->tcp_mstamp;
> tp->rcvq_space.seq = tp->copied_seq;
> --
> 2.16.6
>
> Amazon Web Services EMEA SARL, 38 avenue John F. Kennedy, L-1855 Luxembourg, R.C.S. Luxembourg B186284
>
> Amazon Web Services EMEA SARL, Irish Branch, One Burlington Plaza, Burlington Road, Dublin 4, Ireland, branch registration number 908705