Message-ID: <CAL+tcoB2ZpgM6HM+m=wF2EkQ5caeettcbeUQQBxpLWVuwSSxbw@mail.gmail.com>
Date:   Sat, 14 Jan 2023 10:14:50 +0800
From:   Jason Xing <kerneljasonxing@...il.com>
To:     edumazet@...gle.com, davem@...emloft.net, yoshfuji@...ux-ipv6.org,
        dsahern@...nel.org, pabeni@...hat.com, kuba@...nel.org
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net] tcp: avoid the lookup process failing to get sk in
 ehash table

On Thu, Jan 12, 2023 at 2:54 PM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> From: Jason Xing <kernelxing@...cent.com>
>
> While one CPU is looking up the right socket in the ehash table,
> another CPU has already deleted the request socket and is about to
> add (or is adding) the full socket to the table. This means the
> lookup can miss both of them, even though the window is tiny.
>
> Let me draw a call trace map of the server side.
>    CPU 0                           CPU 1
>    -----                           -----
> tcp_v4_rcv()                  syn_recv_sock()
>                             inet_ehash_insert()
>                             -> sk_nulls_del_node_init_rcu(osk)
> __inet_lookup_established()
>                             -> __sk_nulls_add_node_rcu(sk, list)
>
> Notice that CPU 0 is receiving the data that arrives after the final
> ACK of the 3-way handshake, while CPU 1 is still handling that final ACK.
>
> Why could this be a real problem?
> This happens only when the final ACK and the first data segment are
> handled by different CPUs. The server, receiving the data segment
> with the ACK flag set, tries to find the matching established socket
> in the ehash table, but the lookup fails as the map above shows.
> It then falls back to a listener socket and sends a RST, because the
> skb carrying the data has the ACK flag set, which follows the RST
> rules in RFC 793.
>
> Many thanks to Eric for his great help from beginning to end.
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
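
To make the reader-side window described above concrete, here is a tiny
userspace toy model. This is not the kernel code: struct tsock, lookup()
and the chain handling are made up purely for illustration of what a
concurrent lookup can observe with the current delete-then-insert order.

    #include <stdio.h>

    /* Toy model of one ehash bucket as a singly linked chain of "sockets"
     * keyed by the connection tuple. Illustration only, not the kernel's
     * nulls-list code. */
    struct tsock {
            int key;                /* stands for the 4-tuple */
            struct tsock *next;
    };

    static struct tsock *lookup(struct tsock *chain, int key)
    {
            struct tsock *s;

            for (s = chain; s; s = s->next)
                    if (s->key == key)
                            return s;
            return NULL;    /* caller falls back to the listener, sends RST */
    }

    int main(void)
    {
            struct tsock osk = { 42, NULL };        /* request socket */
            struct tsock sk = { 42, NULL };         /* full socket */
            struct tsock *chain = &osk;

            /* current ordering in inet_ehash_insert(): unlink osk first ... */
            chain = NULL;           /* sk_nulls_del_node_init_rcu(osk) */

            /* ... a lookup racing with us right now misses both sockets */
            printf("during the window: %s\n",
                   lookup(chain, 42) ? "found" : "miss -> listener -> RST");

            chain = &sk;            /* __sk_nulls_add_node_rcu(sk, list) */
            printf("after the insert: %s\n",
                   lookup(chain, 42) ? "found" : "miss");
            return 0;
    }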

I extracted the relevant part of commit 5e0724d027f0 as follows.

@@ -423,30 +423,41 @@ int inet_ehash_insert(struct sock *sk, struct sock *osk)
……
-     __sk_nulls_add_node_rcu(sk, list);
      if (osk) {
-         WARN_ON(sk->sk_hash != osk->sk_hash);
-         sk_nulls_del_node_init_rcu(osk);
+        WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
+        ret = sk_nulls_del_node_init_rcu(osk);
      }
+    if (ret)
+         __sk_nulls_add_node_rcu(sk, list);
……

In the patch I submitted, I reverse the order of inserting and
deleting, that is, I restore the original order used before commit
5e0724d027f0, as Eric suggested.
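
As a rough illustration of why this ordering is enough (again a made-up
userspace toy model, not the real nulls-list code), with insert-before-delete
there is a matching node on the chain at every intermediate step, so a
concurrent lookup can never come back empty:

    #include <stdio.h>

    /* Same kind of toy bucket as before: illustration only, not kernel code. */
    struct tsock {
            int key;
            struct tsock *next;
    };

    static const char *look(struct tsock *chain, int key)
    {
            struct tsock *s;

            for (s = chain; s; s = s->next)
                    if (s->key == key)
                            return "found";
            return "miss";
    }

    int main(void)
    {
            struct tsock osk = { 42, NULL };        /* request socket, hashed */
            struct tsock sk = { 42, NULL };         /* full socket */
            struct tsock *chain = &osk;

            printf("before:           %s\n", look(chain, 42));

            /* restored ordering: link the full socket first ... */
            sk.next = chain;
            chain = &sk;            /* __sk_nulls_add_node_rcu(sk, list) */
            printf("after the insert: %s\n", look(chain, 42));

            /* ... then unlink the request socket; a reader always sees one */
            sk.next = NULL;         /* sk_nulls_del_node_init_rcu(osk) */
            printf("after the delete: %s\n", look(chain, 42));
            return 0;
    }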

I believe it does not affect other use cases. The only thing I want
is to get this issue fixed as soon as possible, no matter what kind
of patch gets merged in the end or who ends up writing it if there is
a better one.
Once that happens I can bring the good news back to my customers, who
complain about this issue quite often, and tell them the kernel
community has settled it completely.

So could someone please take some time to help me review the patch?
It's not complicated. Thank you from the bottom of my heart in
advance.

Jason

> Signed-off-by: Jason Xing <kernelxing@...cent.com>
> ---
>  net/ipv4/inet_hashtables.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> index 24a38b56fab9..18f88cb4efcb 100644
> --- a/net/ipv4/inet_hashtables.c
> +++ b/net/ipv4/inet_hashtables.c
> @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         spin_lock(lock);
>         if (osk) {
>                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> +               if (sk_hashed(osk))
> +                       /* Before deleting the node, we insert a new one to make
> +                        * sure that the lookup process will not miss either of
> +                        * them and that at least one node exists in the ehash
> +                        * table at all times. Otherwise there's a tiny chance
> +                        * that the lookup could find nothing in the ehash table.
> +                        */
> +                       __sk_nulls_add_node_rcu(sk, list);
>                 ret = sk_nulls_del_node_init_rcu(osk);
> +               goto unlock;
>         } else if (found_dup_sk) {
>                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
>                 if (*found_dup_sk)
> @@ -660,6 +669,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         if (ret)
>                 __sk_nulls_add_node_rcu(sk, list);
>
> +unlock:
>         spin_unlock(lock);
>
>         return ret;
> --
> 2.37.3
>
