Message-ID: <CANn89iKQjN1YiHqBTV3+zDYo0G11p-6=p7C-1GvFCp8Y=r4nvQ@mail.gmail.com>
Date: Sat, 14 Jan 2023 10:45:23 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, yoshfuji@...ux-ipv6.org, dsahern@...nel.org,
kuba@...nel.org, pabeni@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net] tcp: avoid the lookup process failing to get sk in
ehash table
On Thu, Jan 12, 2023 at 7:54 AM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> From: Jason Xing <kernelxing@...cent.com>
>
> While one CPU is looking up the right socket in the ehash table,
> another CPU may have just deleted the request socket and be about
> to add (or be adding) the full socket to the table. In that window
> the lookup can miss both of them, even though the chance is small.
>
> Let me draw a call trace map of the server side.
>             CPU 0                            CPU 1
>             -----                            -----
>        tcp_v4_rcv()                    syn_recv_sock()
>                                        inet_ehash_insert()
>                                        -> sk_nulls_del_node_init_rcu(osk)
> __inet_lookup_established()
>                                        -> __sk_nulls_add_node_rcu(sk, list)
>
> Notice that CPU 0 is receiving the data after the final ACK of the
> 3-way handshake, while CPU 1 is still handling that final ACK.
>
> Why could this be a real problem?
> This case happens only when the final ACK and the first data packet
> are received by different CPUs. The server, receiving data with the
> ACK flag set, tries to find the proper established socket in the
> ehash table, but it fails as the map above shows. The server then
> falls back to a listener socket and sends a RST, because it sees an
> ACK flag in the skb (data), which matches the RST rules of RFC 793.
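
For context on why the lookup gives up instead of retrying: the reader
side is lockless, and __inet_lookup_established() only restarts its
traversal when the nulls value terminating the chain does not match the
slot it hashed to. A condensed sketch of that loop (declarations and
refcounting elided from net/ipv4/inet_hashtables.c, so not compilable
as-is):

begin:
        sk_nulls_for_each_rcu(sk, node, &head->chain) {
                if (sk->sk_hash != hash)
                        continue;
                if (likely(inet_match(net, sk, acookie, ports, dif, sdif)))
                        goto found;
        }
        /* A chain that ends with the expected nulls value is treated as
         * consistent even if it is momentarily empty, so the lookup
         * returns NULL rather than restarting - this is exactly why the
         * delete-then-insert window above can make the lookup fail.
         */
        if (get_nulls_value(node) != slot)
                goto begin;
        sk = NULL;
found:
        return sk;
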
>
> Many thanks to Eric for great help from beginning to end.
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Signed-off-by: Jason Xing <kernelxing@...cent.com>
> ---
> net/ipv4/inet_hashtables.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> index 24a38b56fab9..18f88cb4efcb 100644
> --- a/net/ipv4/inet_hashtables.c
> +++ b/net/ipv4/inet_hashtables.c
> @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         spin_lock(lock);
>         if (osk) {
>                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> +               if (sk_hashed(osk))
> +                       /* Before deleting the node, we insert a new one to
> +                        * make sure that the lookup process would not miss
> +                        * either of them and that at least one node would
> +                        * exist in the ehash table all the time. Otherwise
> +                        * there's a tiny chance that the lookup could find
> +                        * nothing in the ehash table.
> +                        */
> +                       __sk_nulls_add_node_rcu(sk, list);
In our private email exchange, I suggested inserting sk at the _tail_
of the hash bucket. Inserting it at the _head_ would still leave a
race condition, because a concurrent reader might have already started
the bucket traversal and would not see 'sk'.

Thanks.
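
Concretely, that would be a tail-insert counterpart of
__sk_nulls_add_node_rcu(), building on hlist_nulls_add_tail_rcu() from
include/linux/rculist_nulls.h. A minimal sketch; the helper name below
is only illustrative, not an existing API:

/* Tail-insert variant of __sk_nulls_add_node_rcu() (include/net/sock.h);
 * the helper name is an assumption for illustration.
 */
static inline void __sk_nulls_add_node_tail_rcu(struct sock *sk,
                                                struct hlist_nulls_head *list)
{
        hlist_nulls_add_tail_rcu(&sk->sk_nulls_node, list);
}

A reader that has already passed the head of the chain would still
reach a tail-inserted sk before hitting the nulls marker, which closes
the window that head insertion leaves open.
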
>                 ret = sk_nulls_del_node_init_rcu(osk);
> +               goto unlock;
>         } else if (found_dup_sk) {
>                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
>                 if (*found_dup_sk)
> @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         if (ret)
>                 __sk_nulls_add_node_rcu(sk, list);
>
> +unlock:
>         spin_unlock(lock);
>
>         return ret;
> --
> 2.37.3
>