Message-ID: <175959a4-fe0d-472b-96c7-c8ae38e1404b@linux.dev>
Date: Wed, 3 Sep 2025 16:03:16 +0800
From: luoxuanqiang <xuanqiang.luo@...ux.dev>
To: Jason Xing <kerneljasonxing@...il.com>, Eric Dumazet <edumazet@...gle.com>
Cc: kuniyu@...gle.com, davem@...emloft.net, kuba@...nel.org,
kernelxing@...cent.com, netdev@...r.kernel.org,
Xuanqiang Luo <luoxuanqiang@...inos.cn>
Subject: Re: [PATCH net] inet: Avoid established lookup missing active sk
On 2025/9/3 14:52, Jason Xing wrote:
> On Wed, Sep 3, 2025 at 2:40 PM Eric Dumazet <edumazet@...gle.com> wrote:
>> On Tue, Sep 2, 2025 at 7:46 PM Xuanqiang Luo <xuanqiang.luo@...ux.dev> wrote:
>>> From: Xuanqiang Luo <luoxuanqiang@...inos.cn>
>>>
>>> Since the lookup of sk in ehash is lockless, when one CPU is performing a
>>> lookup while another CPU is executing delete and insert operations
>>> (deleting reqsk and inserting sk), the lookup CPU may miss either of
>>> them. If sk cannot be found, an RST may be sent.
>>>
>>> The call trace map is drawn as follows:
>>> CPU 0                            CPU 1
>>> -----                            -----
>>>                                  spin_lock()
>>>                                  sk_nulls_del_node_init_rcu(osk)
>>> __inet_lookup_established()
>>>                                  __sk_nulls_add_node_rcu(sk, list)
>>>                                  spin_unlock()
>>>
>>> We can try using spin_lock()/spin_unlock() to wait for ehash updates
>>> (ensuring all deletions and insertions are completed) after a failed
>>> lookup in ehash, then look up sk again after the update. Since the sk
>>> expected to be found is unlikely to encounter the aforementioned scenario
>>> multiple times consecutively, we only need one update.
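
For anyone reading along, here is a rough, untested sketch of what that
retry-under-lock idea could look like, modeled on
__inet_lookup_established(). It is only an illustration, not the actual
patch; ehash_scan() is a made-up stand-in for the usual
sk_nulls_for_each_rcu() walk:

struct sock *lookup_established_with_retry(struct inet_hashinfo *hashinfo,
                                           unsigned int hash)
{
        struct inet_ehash_bucket *head = inet_ehash_bucket(hashinfo, hash);
        spinlock_t *lock = inet_ehash_lockp(hashinfo, hash);
        struct sock *sk;
        bool retried = false;

again:
        sk = ehash_scan(head);          /* lockless nulls-list walk */
        if (sk || retried)
                return sk;

        /*
         * A concurrent "delete reqsk + insert sk" may have been in
         * flight. Taking and releasing the bucket lock waits for that
         * writer to finish; hitting the same window twice in a row is
         * unlikely, so one extra scan is enough.
         */
        spin_lock(lock);
        spin_unlock(lock);
        retried = true;
        goto again;
}
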
>> No need for a lock really...
>> - add the new node (with a temporary 'wrong' nulls value),
>> - delete the old node
>> - replace the nulls value by the expected one.
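
If I read this suggestion correctly, the writer side (bucket lock held,
as in inet_ehash_insert()) would look roughly like the sketch below. I'm
assuming the new node goes at the tail so that its ->next carries the
nulls value; the two helpers marked "hypothetical" don't exist today and
only show the intended ordering:

        spin_lock(lock);

        /* 1) add the new full sock first, with a nulls value on its
         *    ->next that does NOT match this bucket, so a reader whose
         *    walk ends there restarts instead of concluding "not found"
         */
        __sk_nulls_add_node_tail_wrong_nulls(sk, list);    /* hypothetical */

        /* 2) unlink the old request sock; a reader now sees either osk
         *    (if it passed it before this point) or sk
         */
        sk_nulls_del_node_init_rcu(osk);

        /* 3) publish the expected nulls value for this bucket, closing
         *    the forced-restart window
         */
        sk_nulls_set_tail_value(sk, slot);                 /* hypothetical */

        spin_unlock(lock);
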
> Yes. The plan is simple enough to fix this particular issue, and I
> verified it in production long ago. Sadly, the following patch got
> reverted...
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=3f4ca5fafc08881d7a57daa20449d171f2887043
>
> Thanks,
> Jason
Yes, I'm fully aware of this history. I was excited when this issue was
fixed back then, because we have already run into this type of RST
issue many times.

I'm also sharing the links to our previous discussions about this kind
of issue, so that anyone else who comes across this email can find the
full details more easily:
https://lore.kernel.org/netdev/20230615121345.83597-1-duanmuquan@baidu.com/
https://lore.kernel.org/lkml/20230112065336.41034-1-kerneljasonxing@gmail.com/