Message-ID: <CANn89iKvf6i7-Ku-iqYG0JoGqfiewx45ZVoYcCRzbDW7g=RDvQ@mail.gmail.com>
Date: Wed, 15 Oct 2025 02:00:19 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: xuanqiang.luo@...ux.dev
Cc: kuniyu@...gle.com, pabeni@...hat.com, kerneljasonxing@...il.com,
davem@...emloft.net, kuba@...nel.org, netdev@...r.kernel.org,
horms@...nel.org, jiayuan.chen@...ux.dev, ncardwell@...gle.com,
dsahern@...nel.org, Xuanqiang Luo <luoxuanqiang@...inos.cn>
Subject: Re: [PATCH net-next v8 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
On Tue, Oct 14, 2025 at 7:04 PM <xuanqiang.luo@...ux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@...inos.cn>
>
> Since ehash lookups are lockless, if one CPU performs a lookup while
> another concurrently deletes and inserts (removing reqsk and inserting sk),
> the lookup may fail to find the socket, and an RST may be sent.
>
> The call trace map is drawn as follows:
>
>   CPU 0                          CPU 1
>   -----                          -----
>                                  inet_ehash_insert()
>                                  spin_lock()
>                                  sk_nulls_del_node_init_rcu(osk)
>   __inet_lookup_established()
>     (lookup failed)
>                                  __sk_nulls_add_node_rcu(sk, list)
>                                  spin_unlock()
>
> As both deletion and insertion operate on the same ehash chain, this patch
> introduces a new sk_nulls_replace_node_init_rcu() helper function to
> implement atomic replacement.
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Reviewed-by: Kuniyuki Iwashima <kuniyu@...gle.com>
> Reviewed-by: Jiayuan Chen <jiayuan.chen@...ux.dev>
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@...inos.cn>
Reviewed-by: Eric Dumazet <edumazet@...gle.com>