Message-ID: <CANn89iKR6XoB6tfJ2wLK1LqkNE1FboFO-PeOpuLNM1_5KOM53Q@mail.gmail.com>
Date: Sat, 14 Jan 2023 13:30:51 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, yoshfuji@...ux-ipv6.org, dsahern@...nel.org,
	kuba@...nel.org, pabeni@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net] tcp: avoid the lookup process failing to get sk in
ehash table
On Sat, Jan 14, 2023 at 1:06 PM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> On Sat, Jan 14, 2023 at 5:45 PM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Thu, Jan 12, 2023 at 7:54 AM Jason Xing <kerneljasonxing@...il.com> wrote:
> > >
> > > From: Jason Xing <kernelxing@...cent.com>
> > >
> > > While one CPU is looking up the right socket in the ehash table,
> > > another CPU has finished deleting the request socket and is about
> > > to add (or is adding) the full socket to the table. This means the
> > > lookup could miss both of them, even though the window is tiny.
> > >
> > > Let me draw a call trace map of the server side.
> > >     CPU 0                            CPU 1
> > >     -----                            -----
> > >  tcp_v4_rcv()                     syn_recv_sock()
> > >                                   inet_ehash_insert()
> > >                                   -> sk_nulls_del_node_init_rcu(osk)
> > >  __inet_lookup_established()
> > >                                   -> __sk_nulls_add_node_rcu(sk, list)
> > >
> > > Notice that CPU 0 is receiving the data sent after the final ACK of
> > > the 3-way handshake, while CPU 1 is still handling that final ACK.
> > >
> > > Why could this be a real problem?
> > > This case only happens when the final ACK and the first data segment
> > > are processed on different CPUs. The server, receiving data with the
> > > ACK flag set, tries to find the matching established socket in the
> > > ehash table, but fails as the map above shows. It then falls back to
> > > a listener socket and sends a RST, because it finds an ACK flag on
> > > the data skb with no matching connection, which calls for a reset
> > > per RFC 793.
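
A minimal sketch of the reader side described above (not the exact
__inet_lookup_established() code; match() is a hypothetical stand-in for
the real address/port comparison, while sk_nulls_for_each_rcu() and
get_nulls_value() are the actual nulls-list helpers):

static struct sock *ehash_lookup_sketch(struct hlist_nulls_head *bucket,
					unsigned int slot)
{
	const struct hlist_nulls_node *node;
	struct sock *sk;

begin:
	sk_nulls_for_each_rcu(sk, node, bucket) {
		if (match(sk))		/* hypothetical match helper */
			return sk;
	}
	/* End of chain: the lookup restarts only when the nulls value shows
	 * we drifted onto the wrong chain.  A chain that simply lacks the
	 * socket returns NULL -- so if osk has already been unlinked and sk
	 * has not yet been inserted, the lookup misses both and falls back
	 * to the listener, which answers the data skb with a RST.
	 */
	if (get_nulls_value(node) != slot)
		goto begin;
	return NULL;
}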
> > >
> > > Many thanks to Eric for great help from beginning to end.
> > >
> > > Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> > > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > > ---
> > > net/ipv4/inet_hashtables.c | 10 ++++++++++
> > > 1 file changed, 10 insertions(+)
> > >
> > > diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> > > index 24a38b56fab9..18f88cb4efcb 100644
> > > --- a/net/ipv4/inet_hashtables.c
> > > +++ b/net/ipv4/inet_hashtables.c
> > > @@ -650,7 +650,16 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> > >  	spin_lock(lock);
> > >  	if (osk) {
> > >  		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> > > +		if (sk_hashed(osk))
> > > +			/* Before deleting the node, we insert a new one to make
> > > +			 * sure that the lookup process would not miss either of
> > > +			 * them and that at least one node would exist in the ehash
> > > +			 * table all the time. Otherwise there's a tiny chance that
> > > +			 * the lookup could find nothing in the ehash table.
> > > +			 */
> > > +			__sk_nulls_add_node_rcu(sk, list);
> >
> > In our private email exchange, I suggested to insert sk at the _tail_
> > of the hash bucket.
> >
>
> Yes, I noticed that. At that time I kept thinking about the race
> condition within RCU itself, not the scenario you describe below.
>
> > Inserting it at the _head_ would still leave a race condition, because
> > a concurrent reader might
> > have already started the bucket traversal, and would not see 'sk'.
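
To make that window concrete, here is a rough illustration and a sketch
of the intended ordering in inet_ehash_insert(), using the
__sk_nulls_add_node_tail_rcu() helper mentioned below for v2 (an
illustration only, not the actual v2 patch):

	/* bucket:  head -> ... -> A -> osk -> (nulls)
	 * a concurrent reader R has passed the head but not yet reached osk.
	 *
	 * head insertion, then delete osk:
	 *   head -> sk -> ... -> A -> osk -> (nulls), osk then unlinked.
	 *   R already passed the head, so it never sees sk; following
	 *   A->next after the unlink it skips osk too -> lookup finds nothing.
	 *
	 * tail insertion, then delete osk:
	 *   head -> ... -> A -> osk -> sk -> (nulls), osk then unlinked.
	 *   Even if R skips osk, it still reaches sk at the tail -> lookup hits.
	 */
	if (sk_hashed(osk))
		__sk_nulls_add_node_tail_rcu(sk, list);	/* publish sk first */
	ret = sk_nulls_del_node_init_rcu(osk);		/* then unlink osk */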
>
> Thanks for the detailed explanation. Now I see why. I'll replace it
> with __sk_nulls_add_node_tail_rcu() and send the v2 patch.
>
> By the way, I checked the removal of the TIME_WAIT socket, which is
> also covered by this patch.
> Here is the call trace:
> inet_hash_connect()
>   -> __inet_hash_connect()
>     -> if (sk_unhashed(sk)) {
>            inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
>              -> inet_ehash_insert(sk, osk, found_dup_sk);
> Therefore, this patch covers the TIME_WAIT case.
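
A simplified view of the fragment Jason is pointing at in
__inet_hash_connect() (reduced to the lines in the trace above): the
TIME_WAIT socket is handed to inet_ehash_nolisten() as osk, so the
insert-before-delete ordering added by this patch protects this
transition as well.

	if (sk_unhashed(sk)) {
		/* tw is passed as 'osk' down into inet_ehash_insert() */
		inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
	}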
This is the path handling the TIME_WAIT ---> ESTABLISHED case.
I was referring to the more common opposite case, which is where the
race could possibly happen.
This is inet_twsk_hashdance, and I suspect we want something like:
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 1d77d992e6e77f7d96bd061be6dbb802c2566b3f..6d681ef52bb24b984a9dbda25b19291fc4393914 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -91,10 +91,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
 }
 EXPORT_SYMBOL_GPL(inet_twsk_put);
 
-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
+static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
 				   struct hlist_nulls_head *list)
 {
-	hlist_nulls_add_head_rcu(&tw->tw_node, list);
+	hlist_nulls_add_tail_rcu(&tw->tw_node, list);
 }
 
 static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
@@ -147,7 +147,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
 
 	spin_lock(lock);
 
-	inet_twsk_add_node_rcu(tw, &ehead->chain);
+	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
 
 	/* Step 3: Remove SK from hash chain */
 	if (__sk_nulls_del_node_init_rcu(sk))
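
The reasoning is the same as in inet_ehash_insert():
inet_twsk_hashdance() publishes the timewait socket before unlinking the
established one, and tail insertion keeps tw visible to a reader that has
already walked past the bucket head. Roughly (a sketch of the resulting
ordering, not a literal quote of the function):

	spin_lock(lock);

	/* publish tw first, at the tail of the bucket ... */
	inet_twsk_add_node_tail_rcu(tw, &ehead->chain);

	/* ... and only then unlink the established socket, so the bucket
	 * always holds at least one of sk/tw for a concurrent lookup.
	 */
	if (__sk_nulls_del_node_init_rcu(sk))
		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);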