Message-ID: <CAAVpQUDwKTOpJAHU7W2rkjb91U8WE6mL3vdTxx_3wAb4C-M4vQ@mail.gmail.com>
Date: Mon, 15 Sep 2025 19:18:03 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: luoxuanqiang <xuanqiang.luo@...ux.dev>
Cc: edumazet@...gle.com, kerneljasonxing@...il.com, davem@...emloft.net,
kuba@...nel.org, netdev@...r.kernel.org,
Xuanqiang Luo <luoxuanqiang@...inos.cn>
Subject: Re: [PATCH net-next v1 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
On Mon, Sep 15, 2025 at 6:57 PM luoxuanqiang <xuanqiang.luo@...ux.dev> wrote:
>
>
> On 2025/9/16 07:00, Kuniyuki Iwashima wrote:
> > On Mon, Sep 15, 2025 at 12:04 AM <xuanqiang.luo@...ux.dev> wrote:
> >> From: Xuanqiang Luo <luoxuanqiang@...inos.cn>
> >>
> >> Since ehash lookups are lockless, if one CPU performs a lookup while
> >> another concurrently deletes and inserts (removing reqsk and inserting sk),
> >> the lookup may fail to find the socket, and an RST may be sent.
> >>
> >> The race window is illustrated below:
> >>         CPU 0                           CPU 1
> >>         -----                           -----
> >>                                         inet_ehash_insert()
> >>                                         spin_lock()
> >>                                         sk_nulls_del_node_init_rcu(osk)
> >> __inet_lookup_established()
> >> (lookup failed)
> >>                                         __sk_nulls_add_node_rcu(sk, list)
> >>                                         spin_unlock()
> >>
> >> As both deletion and insertion operate on the same ehash chain, this patch
> >> introduces two new sk_nulls_replace_* helper functions to implement atomic
> >> replacement.
> >>
> >> If sk_nulls_replace_node_init_rcu() fails, osk is either hlist_unhashed
> >> or hlist_nulls_unhashed. In the former case we return false; in the
> >> latter, we insert sk without deleting osk.
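
For context: hlist_nulls_replace_init_rcu() itself is introduced in patch
1/3 of this series, which is not quoted here.  Below is a rough, untested
sketch of what such a helper could look like, modeled on the existing
hlist_replace_rcu() and hlist_nulls_del_init_rcu(); the body is a guess at
its shape, not the actual code from the series:

static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
						struct hlist_nulls_node *new)
{
	struct hlist_nulls_node *next = old->next;

	if (hlist_nulls_unhashed(old))
		return false;

	new->next = next;
	WRITE_ONCE(new->pprev, old->pprev);
	/* Publish @new in @old's place first, so lockless readers walking
	 * the chain always find one of the two sockets.
	 */
	rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new);
	if (!is_a_nulls(next))
		WRITE_ONCE(next->pprev, &new->next);
	WRITE_ONCE(old->pprev, NULL);	/* the _init part: mark @old unhashed */
	return true;
}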
> >>
> >> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@...inos.cn>
> >> ---
> >>  include/net/sock.h         | 23 +++++++++++++++++++++++
> >>  net/ipv4/inet_hashtables.c |  7 +++++++
> >>  2 files changed, 30 insertions(+)
> >>
> >> diff --git a/include/net/sock.h b/include/net/sock.h
> >> index 896bec2d2176..26dacf7bc93e 100644
> >> --- a/include/net/sock.h
> >> +++ b/include/net/sock.h
> >> @@ -859,6 +859,29 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
> >>  	return rc;
> >>  }
> >>
> >> +static inline bool __sk_nulls_replace_node_init_rcu(struct sock *old,
> >> +						    struct sock *new)
> >> +{
> >> +	if (sk_hashed(old) &&
> >> +	    hlist_nulls_replace_init_rcu(&old->sk_nulls_node,
> >> +					 &new->sk_nulls_node))
> >> +		return true;
> >> +
> >> +	return false;
> >> +}
> >> +
> >> +static inline bool sk_nulls_replace_node_init_rcu(struct sock *old,
> >> +						  struct sock *new)
> >> +{
> >> +	bool rc = __sk_nulls_replace_node_init_rcu(old, new);
> >> +
> >> +	if (rc) {
> >> +		WARN_ON(refcount_read(&old->sk_refcnt) == 1);
> >> +		__sock_put(old);
> >> +	}
> >> +	return rc;
> >> +}
> >> +
> >>  static inline void __sk_add_node(struct sock *sk, struct hlist_head *list)
> >>  {
> >>  	hlist_add_head(&sk->sk_node, list);
> >> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> >> index ef4ccfd46ff6..7803fd3cc8e9 100644
> >> --- a/net/ipv4/inet_hashtables.c
> >> +++ b/net/ipv4/inet_hashtables.c
> >> @@ -685,6 +685,12 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >>  	spin_lock(lock);
> >>  	if (osk) {
> >>  		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> >> +		/* Since osk and sk should be in the same ehash bucket, try
> >> +		 * direct replacement to avoid lookup gaps. On failure, no
> >> +		 * changes. sk_nulls_del_node_init_rcu() will handle the rest.
> > Both sk_nulls_replace_node_init_rcu() and
> > sk_nulls_del_node_init_rcu() return true only when
> > sk_hashed(osk) == true.
> >
> > So when the replacement fails, the only thing
> > sk_nulls_del_node_init_rcu() does is set ret to false.
> >
> >
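For reference, the existing helpers in include/net/sock.h read roughly as
follows (quoted from current mainline):

static inline bool __sk_nulls_del_node_init_rcu(struct sock *sk)
{
	if (sk_hashed(sk)) {
		hlist_nulls_del_init_rcu(&sk->sk_nulls_node);
		return true;
	}
	return false;
}

static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
{
	bool rc = __sk_nulls_del_node_init_rcu(sk);

	if (rc) {
		/* paranoid for a while -^-^-^-^-^-^-^-^-^ */
		WARN_ON(refcount_read(&sk->sk_refcnt) == 1);
		__sock_put(sk);
	}
	return rc;
}
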
> >> +		 */
> >> +		if (sk_nulls_replace_node_init_rcu(osk, sk))
> >> +			goto unlock;
> >>  		ret = sk_nulls_del_node_init_rcu(osk);
> > So, should we simply do
> >
> > ret = sk_nulls_replace_node_init_rcu(osk, sk);
> > goto unlock;
> >
> > ?
>
> sk_nulls_replace_node_init_rcu() only returns true if both
> sk_hashed(osk) == true and hlist_nulls_unhashed(old) == false.
sk_hashed(sk) == !hlist_nulls_unhashed(&sk->sk_nulls_node)
is always true, as sk_node and sk_nulls_node are in a union.
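
Concretely, in struct sock_common (sk_node and sk_nulls_node both resolve
to this union through the __sk_common accessor macros, and hlist_unhashed()
and hlist_nulls_unhashed() each just test !node->pprev):

	union {
		struct hlist_node	skc_node;
		struct hlist_nulls_node	skc_nulls_node;
	};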
> However, in the original sk_nulls_del_node_init_rcu() logic, when
> sk_hashed(osk) == true,
So this should be an unreachable branch.
> it always performs __sock_put(sk) regardless of
> the hlist_nulls_unhashed(old) check. Therefore, if
> sk_nulls_replace_node_init_rcu() fails, we can safely let both ret and
> the __sock_put(sk) be handled by the subsequent
> sk_nulls_del_node_init_rcu(osk) call. Thanks, Xuanqiang.
>
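Put differently: if the fallback branch really is unreachable, the osk path
could collapse to something like the following (an untested sketch of the
simplification suggested above, not the actual patch):

	if (osk) {
		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
		/* osk and sk sit in the same ehash bucket; replacing osk
		 * in place leaves no window where lockless readers find
		 * neither socket.  On success sk is already linked, so
		 * __sk_nulls_add_node_rcu() below must be skipped.
		 */
		ret = sk_nulls_replace_node_init_rcu(osk, sk);
		goto unlock;
	}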
> >
> >>  	} else if (found_dup_sk) {
> >>  		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
> >> @@ -695,6 +701,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
> >>  	if (ret)
> >>  		__sk_nulls_add_node_rcu(sk, list);
> >>
> >> +unlock:
> >>  	spin_unlock(lock);
> >>
> >>  	return ret;
> >> --
> >> 2.27.0
> >>