Message-ID: <CAEA6p_BQz1TFiu9sQRit9L-roScxNBkmfMoyyR+vsRyj5BRuCw@mail.gmail.com>
Date: Mon, 19 Jul 2021 08:40:04 -0700
From: Wei Wang <weiwan@...gle.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Yuchung Cheng <ycheng@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>
Subject: Re: [PATCH net] net/tcp_fastopen: fix data races around tfo_active_disable_stamp
On Mon, Jul 19, 2021 at 2:12 AM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> From: Eric Dumazet <edumazet@...gle.com>
>
> tfo_active_disable_stamp is read and written locklessly.
> We need to annotate these accesses appropriately.
>
> Then, we need to perform the atomic_inc(tfo_active_disable_times)
> after the timestamp has been updated, and thus add barriers
> to make sure tcp_fastopen_active_should_disable() won't read
> a stale timestamp.
>
> Fixes: cf1ef3f0719b ("net/tcp_fastopen: Disable active side TFO in certain scenarios")
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Wei Wang <weiwan@...gle.com>
> Cc: Yuchung Cheng <ycheng@...gle.com>
> Cc: Neal Cardwell <ncardwell@...gle.com>
> ---
Thanks Eric!
Acked-by: Wei Wang <weiwan@...gle.com>
> net/ipv4/tcp_fastopen.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
> index 47c32604d38fca960d2cd56f3588bfd2e390b789..b32af76e21325373126b51423496e3b8d47d97ff 100644
> --- a/net/ipv4/tcp_fastopen.c
> +++ b/net/ipv4/tcp_fastopen.c
> @@ -507,8 +507,15 @@ void tcp_fastopen_active_disable(struct sock *sk)
> {
> struct net *net = sock_net(sk);
>
> + /* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
> + WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
> +
> + /* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
> + * We want net->ipv4.tfo_active_disable_stamp to be updated first.
> + */
> + smp_mb__before_atomic();
> atomic_inc(&net->ipv4.tfo_active_disable_times);
> - net->ipv4.tfo_active_disable_stamp = jiffies;
> +
> NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
> }
>
> @@ -526,10 +533,16 @@ bool tcp_fastopen_active_should_disable(struct sock *sk)
> if (!tfo_da_times)
> return false;
>
> + /* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
> + smp_rmb();
> +
> /* Limit timeout to max: 2^6 * initial timeout */
> multiplier = 1 << min(tfo_da_times - 1, 6);
> - timeout = multiplier * tfo_bh_timeout * HZ;
> - if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
> +
> + /* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
> + timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
> + multiplier * tfo_bh_timeout * HZ;
> + if (time_before(jiffies, timeout))
> return true;
>
> /* Mark check bit so we can check for successful active TFO
> --
> 2.32.0.402.g57bb445576-goog
>