Message-ID: <osfubz5wloxmthq5kcvzrpcszmpself2lijlc6duw57tbyh565@7cbkpapsmokb>
Date: Sat, 27 Sep 2025 10:56:12 +0800
From: Jiayuan Chen <jiayuan.chen@...ux.dev>
To: xuanqiang.luo@...ux.dev
Cc: edumazet@...gle.com, kuniyu@...gle.com,
"Paul E. McKenney" <paulmck@...nel.org>, kerneljasonxing@...il.com, davem@...emloft.net, kuba@...nel.org,
netdev@...r.kernel.org, Xuanqiang Luo <luoxuanqiang@...inos.cn>,
Frederic Weisbecker <frederic@...nel.org>, Neeraj Upadhyay <neeraj.upadhyay@...nel.org>
Subject: Re: [PATCH net-next v7 0/3] net: Avoid ehash lookup races
On Fri, Sep 26, 2025 at 03:40:30PM +0800, xuanqiang.luo@...ux.dev wrote:
> From: Xuanqiang Luo <luoxuanqiang@...inos.cn>
>
> After replacing R/W locks with RCU in commit 3ab5aee7fe84 ("net: Convert
> TCP & DCCP hash tables to use RCU / hlist_nulls"), a race window emerged
> during the switch from reqsk/sk to sk/tw.
>
> Now that both timewait sock (tw) and full sock (sk) reside on the same
> ehash chain, it is appropriate to introduce hlist_nulls replace
> operations, to eliminate the race conditions caused by this window.
>
> Before this series, I sent another version of the patch that attempted to
> avoid the issue with a locking mechanism. That approach turned out to have
> problems, so I have switched to the "replace" method in the current
> patches. For details, refer to:
> https://lore.kernel.org/netdev/20250903024406.2418362-1-xuanqiang.luo@linux.dev/
>
> When I ran into this issue recently, I found there had already been
> several historical discussions about it, so I'm adding the links here as
> background for anyone interested:
> 1. https://lore.kernel.org/lkml/20230118015941.1313-1-kerneljasonxing@gmail.com/
> 2. https://lore.kernel.org/netdev/20230606064306.9192-1-duanmuquan@baidu.com/
Reviewed-by: Jiayuan Chen <jiayuan.chen@...ux.dev>
---
Thank you Xuanqiang and Kuniyuki. This issue appears to have existed for a
long time. Under normal circumstances it is masked when RSS or RPS is
enabled, since all packets of a TCP flow are then steered to the same CPU.
However, we have recently been hitting it frequently in our production
environment. The root cause is that our TCP traffic is encapsulated in
VXLAN, and packets of the same inner TCP flow do not share the same outer
UDP 4-tuple. As a result, the host decapsulating the VXLAN traffic
processes a single flow on several CPUs concurrently, which exposes the
lookup race.
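To illustrate the steering effect described above: RSS selects an RX queue
from a hash of the *outer* headers, so when the outer UDP source port is
not stable for an inner flow, its packets land on different queues/CPUs.
The function below is a toy stand-in for the NIC's Toeplitz hash, purely
for illustration; the addresses and ports are made-up values, not kernel
code:

```c
#include <assert.h>
#include <stdint.h>

/* Toy RSS sketch: hash the outer 4-tuple, fold high bits into low bits so
 * the queue index depends on every field, then reduce modulo the queue
 * count. Real NICs use a Toeplitz hash, but the property shown is the
 * same: change the outer source port and the queue can change. */
static unsigned int rx_queue(uint32_t saddr, uint32_t daddr,
                             uint16_t sport, uint16_t dport,
                             unsigned int nqueues)
{
    uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);

    h ^= h >> 16;           /* fold so sport influences the low bits */
    return h % nqueues;
}
```

Two VXLAN packets of the same inner TCP flow that differ only in the outer
UDP source port can thus be delivered to different CPUs, and both CPUs may
walk the same ehash chain at the moment the sk/tw switch happens.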
I tested this patch and it fixed this issue.
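For readers who want the shape of the "replace" idea from the cover letter:
the point is to swap the new entry for the old one on the RCU nulls chain
with a single pointer store, so a concurrent lookup always finds either the
old sock or its replacement, never a window with neither. The following is
a userspace model patterned after the kernel's hlist_replace_rcu(); the
function name and the plain pointer stores (standing in for
rcu_assign_pointer()) are illustrative, not the actual patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal userspace model of a nulls hash-list node: 'next' with its low
 * bit set encodes the "nulls" end-of-chain marker instead of NULL. */
struct hlist_nulls_node {
    struct hlist_nulls_node *next;
    struct hlist_nulls_node **pprev;
};

static int is_a_nulls(const struct hlist_nulls_node *p)
{
    return ((uintptr_t)p & 1) != 0;
}

/* Replace @old with @new in place: @new first inherits @old's links, then
 * the predecessor's next pointer is switched over in one store, which is
 * the step a concurrent reader observes atomically. In the kernel that
 * store would be rcu_assign_pointer(). */
static void hlist_nulls_replace(struct hlist_nulls_node *old,
                                struct hlist_nulls_node *new)
{
    new->next = old->next;
    new->pprev = old->pprev;
    *new->pprev = new;
    if (!is_a_nulls(new->next))
        new->next->pprev = &new->next;
    old->pprev = NULL;
}
```

A reader traversing the chain during the swap sees a fully linked list at
every step, which is exactly what closes the ehash lookup window.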