Message-ID: <CAGHK07B9E0AOBNtqVqKyJQOdU7ijdVi-7jLwnH+=S7ZgG5kpeA@mail.gmail.com>
Date: Fri, 27 Sep 2019 18:25:19 +1000
From: Jonathan Maxwell <jmaxwell37@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Yuchung Cheng <ycheng@...gle.com>,
Marek Majkowski <marek@...udflare.com>
Subject: Re: [PATCH net] tcp: better handle TCP_USER_TIMEOUT in SYN_SENT state

Acked-by: Jon Maxwell <jmaxwell37@...il.com>

Thanks for fixing that, Eric.
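
For anyone else tracking this down: the scenario described below boils down
to setting TCP_USER_TIMEOUT (and optionally TCP_SYNCNT) on a socket before
connect() towards a peer that never answers the SYN. A rough user-space
sketch of that setup; the 192.0.2.1 test address, port and timeout values
are illustrative only, not taken from the original reports:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int main(void)
    {
            struct sockaddr_in addr;
            unsigned int user_timeout_ms = 5000; /* TCP_USER_TIMEOUT is in milliseconds */
            int syn_retries = 6;                 /* TCP_SYNCNT caps SYN retransmissions */
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }
            setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                       &user_timeout_ms, sizeof(user_timeout_ms));
            setsockopt(fd, IPPROTO_TCP, TCP_SYNCNT,
                       &syn_retries, sizeof(syn_retries));

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(12345);                    /* illustrative port */
            inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr); /* TEST-NET, no SYN-ACK expected */

            /* Before the fix, once user_timeout_ms elapses the kernel keeps
             * retransmitting SYNs on one-jiffy timers until TCP_SYNCNT is hit;
             * with the fix, connect() fails once either limit is reached.
             */
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                    perror("connect");
            close(fd);
            return 0;
    }
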
On Fri, Sep 27, 2019 at 8:42 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> Yuchung Cheng and Marek Majkowski independently reported a weird
> behavior of the TCP_USER_TIMEOUT option when used at connect() time.
>
> When the TCP_USER_TIMEOUT is reached, tcp_write_timeout()
> believes the flow should live, and the following condition
> in tcp_clamp_rto_to_user_timeout() programs one-jiffy timers:
>
>   remaining = icsk->icsk_user_timeout - elapsed;
>   if (remaining <= 0)
>           return 1; /* user timeout has passed; fire ASAP */
>
> This silly situation ends when the max SYN rtx count is reached.
>
> This patch makes sure we honor both TCP_SYNCNT and TCP_USER_TIMEOUT,
> avoiding these spurious SYN packets.
>
> Fixes: b701a99e431d ("tcp: Add tcp_clamp_rto_to_user_timeout() helper to improve accuracy")
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Reported-by: Yuchung Cheng <ycheng@...gle.com>
> Reported-by: Marek Majkowski <marek@...udflare.com>
> Cc: Jon Maxwell <jmaxwell37@...il.com>
> Link: https://marc.info/?l=linux-netdev&m=156940118307949&w=2
> ---
> net/ipv4/tcp_timer.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
> index dbd9d2d0ee63aa46ad2dda417da6ec9409442b77..40de2d2364a1eca14c259d77ebed361d17829eb9 100644
> --- a/net/ipv4/tcp_timer.c
> +++ b/net/ipv4/tcp_timer.c
> @@ -210,7 +210,7 @@ static int tcp_write_timeout(struct sock *sk)
>  	struct inet_connection_sock *icsk = inet_csk(sk);
>  	struct tcp_sock *tp = tcp_sk(sk);
>  	struct net *net = sock_net(sk);
> -	bool expired, do_reset;
> +	bool expired = false, do_reset;
>  	int retry_until;
>
>  	if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
> @@ -242,9 +242,10 @@ static int tcp_write_timeout(struct sock *sk)
>  			if (tcp_out_of_resources(sk, do_reset))
>  				return 1;
>  		}
> +	}
> +	if (!expired)
>  		expired = retransmits_timed_out(sk, retry_until,
>  						icsk->icsk_user_timeout);
> -	}
>  	tcp_fastopen_active_detect_blackhole(sk, expired);
>
>  	if (BPF_SOCK_OPS_TEST_FLAG(tp, BPF_SOCK_OPS_RTO_CB_FLAG))
> --
> 2.23.0.444.g18eeb5a265-goog
>
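
One more note below the patch, mostly for the archive: the net effect of the
two hunks is that the SYN_SENT/SYN_RECV branch now only decides the TCP_SYNCNT
part, and the TCP_USER_TIMEOUT check is applied after both branches. A
deliberately simplified, stand-alone sketch of that control flow (plain
user-space C, not the kernel code; the struct and function names are made up,
and the established-flow handling is elided):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the decision tcp_write_timeout() makes after
     * this patch; the real code uses retransmits_timed_out() and handles
     * orphaned/established flows, which are skipped here.
     */
    struct conn {
            bool syn_sent;                /* SYN_SENT/SYN_RECV vs. established */
            int retransmits;              /* retransmissions so far */
            int syn_retries;              /* TCP_SYNCNT / sysctl tcp_syn_retries */
            unsigned int elapsed_ms;      /* since the first (re)transmission */
            unsigned int user_timeout_ms; /* TCP_USER_TIMEOUT, 0 = unset */
    };

    static bool write_timeout_expired(const struct conn *c)
    {
            bool expired = false;   /* hunk 1: start from false */

            if (c->syn_sent)
                    expired = c->retransmits >= c->syn_retries; /* TCP_SYNCNT */
            /* else: established-flow checks elided */

            /* hunk 2: whichever branch ran, also honor TCP_USER_TIMEOUT */
            if (!expired && c->user_timeout_ms)
                    expired = c->elapsed_ms >= c->user_timeout_ms;

            return expired;
    }

    int main(void)
    {
            /* Only 2 of 6 SYN retransmits sent, but the 5s user timeout has
             * elapsed: this now reads as expired instead of arming one-jiffy
             * retransmit timers until TCP_SYNCNT is finally reached.
             */
            struct conn c = {
                    .syn_sent = true, .retransmits = 2, .syn_retries = 6,
                    .elapsed_ms = 6000, .user_timeout_ms = 5000,
            };

            printf("expired: %d\n", write_timeout_expired(&c)); /* prints 1 */
            return 0;
    }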