Message-ID: <20250726010846.1105875-1-kuba@kernel.org>
Date: Fri, 25 Jul 2025 18:08:46 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: davem@...emloft.net
Cc: netdev@...r.kernel.org,
edumazet@...gle.com,
pabeni@...hat.com,
andrew+netdev@...n.ch,
horms@...nel.org,
Jakub Kicinski <kuba@...nel.org>,
Jason Wang <jasowang@...hat.com>,
Zigit Zo <zuozhijie@...edance.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
leitao@...ian.org,
sdf@...ichev.me
Subject: [PATCH net v2] netpoll: prevent hanging NAPI when netcons gets enabled

Paolo spotted hangs in NIPA running driver tests against virtio.
The tests hang in virtnet_close() -> virtnet_napi_tx_disable().
The problem is only reproducible if running multiple of our tests
in sequence (I used TEST_PROGS="xdp.py ping.py netcons_basic.sh \
netpoll_basic.py stats.py"). Initial suspicion was that this is
a simple case of double-disable of NAPI, but instrumenting the
code reveals:

  Deadlocked on NAPI ffff888007cd82c0 (virtnet_poll_tx):
    state: 0x37, disabled: false, owner: 0, listed: false, weight: 64

The NAPI was not in fact disabled, owner is 0 (rather than -1),
so the NAPI "thinks" it's scheduled for CPU 0 but it's not listed
(!list_empty(&n->poll_list) => false). It seems odd that normal NAPI
processing would wedge itself like this.
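
For reference, the dump above came from ad-hoc debug instrumentation
along these lines (a sketch only; the helper name and field mappings
are illustrative, not part of this patch, and it assumes CONFIG_NETPOLL
so that napi->poll_owner exists):

  /* Hypothetical debug helper, e.g. in net/core/dev.c, invoked when
   * napi_disable() spins for too long. Prints the fields shown above.
   */
  static void napi_debug_dump(struct napi_struct *n)
  {
  	pr_err("Deadlocked on NAPI %px (%pS):\n", n, n->poll);
  	pr_err("  state: 0x%lx, disabled: %s, owner: %d, listed: %s, weight: %d\n",
  	       n->state,
  	       napi_disable_pending(n) ? "true" : "false",
  	       READ_ONCE(n->poll_owner),
  	       !list_empty(&n->poll_list) ? "true" : "false",
  	       n->weight);
  }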
Better suspicion is that netpoll gets enabled while NAPI is polling,
and also grabs the NAPI instance. This confuses napi_complete_done():

  [netpoll]                               [normal NAPI]
                                          napi_poll()
                                            have = netpoll_poll_lock()
                                              rcu_access_pointer(dev->npinfo)
                                                return NULL # no netpoll
                                            __napi_poll()
                                              ->poll(->weight)
  poll_napi()
    cmpxchg(->poll_owner, -1, cpu)
    poll_one_napi()
      set_bit(NAPI_STATE_NPSVC, ->state)
                                              napi_complete_done()
                                                if (NAPIF_STATE_NPSVC)
                                                  return false
                                   # exit without clearing SCHED

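The pieces involved, abridged from net/core/netpoll.c and
net/core/dev.c (the exact shape may differ slightly between trees):
poll_one_napi() sets NPSVC around a zero-budget ->poll() call, and
napi_complete_done() returns early when it sees that bit, leaving
SCHED set:

  /* net/core/netpoll.c (abridged) */
  static void poll_one_napi(struct napi_struct *napi)
  {
  	int work;

  	/* Bit already set means NAPI is being disabled; bail out. */
  	if (test_and_set_bit(NAPI_STATE_NPSVC, &napi->state))
  		return;

  	/* Budget of 0: only reap TX completions, no RX processing. */
  	work = napi->poll(napi, 0);
  	clear_bit(NAPI_STATE_NPSVC, &napi->state);
  }

  /* net/core/dev.c (abridged) */
  bool napi_complete_done(struct napi_struct *n, int work_done)
  {
  	/* ... */
  	/* Netpoll (NPSVC) or busy polling owns the instance;
  	 * return without touching SCHED.
  	 */
  	if (unlikely(n->state & (NAPIF_STATE_NPSVC |
  				 NAPIF_STATE_IN_BUSY_POLL)))
  		return false;
  	/* ... */
  }
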
This feels very unlikely, but perhaps virtio has some interactions
with the hypervisor in the NAPI ->poll that make the race window
larger?
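
The window exists because netpoll_poll_lock() skips the poll_owner
dance entirely while dev->npinfo reads as NULL (abridged from
include/linux/netpoll.h; exact form varies by tree). This is also why
the fix below works: once no in-flight poll can still see
npinfo == NULL, netpoll and normal NAPI serialize on poll_owner:

  static inline void *netpoll_poll_lock(struct napi_struct *napi)
  {
  	struct net_device *dev = napi->dev;

  	if (dev && rcu_access_pointer(dev->npinfo)) {
  		int owner = smp_processor_id();

  		/* Serialize with netpoll: wait until no other
  		 * CPU owns this NAPI instance.
  		 */
  		while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
  			cpu_relax();

  		return napi;
  	}
  	return NULL;
  }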
Best I could do to prove the theory was to add and trigger this
warning in napi_poll() (just before netpoll_poll_unlock()):

  WARN_ONCE(!have && rcu_access_pointer(n->dev->npinfo) &&
            napi_is_scheduled(n) && list_empty(&n->poll_list),
            "NAPI race with netpoll %px", n);

If this warning hits, the next virtnet_close() will hang.

This patch survived 30 test iterations without a hang (without it
the longest clean run was around 10). Credit for triggering this
goes to Breno's recent netconsole tests.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: Paolo Abeni <pabeni@...hat.com>
Link: https://lore.kernel.org/c5a93ed1-9abe-4880-a3bb-8d1678018b1d@redhat.com
Signed-off-by: Jakub Kicinski <kuba@...nel.org>
---
v2:
- move the sync to netpoll_setup()
v1: https://lore.kernel.org/20250725024454.690517-1-kuba@kernel.org
CC: Jason Wang <jasowang@...hat.com>
CC: Zigit Zo <zuozhijie@...edance.com>
CC: "Michael S. Tsirkin" <mst@...hat.com>
CC: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
CC: Eugenio Pérez <eperezma@...hat.com>
CC: leitao@...ian.org
CC: sdf@...ichev.me
---
 net/core/netpoll.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index a1da97b5b30b..5f65b62346d4 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -768,6 +768,13 @@ int netpoll_setup(struct netpoll *np)
 	if (err)
 		goto flush;
 	rtnl_unlock();
+
+	/* Make sure all NAPI polls which started before dev->npinfo
+	 * was visible have exited before we start calling NAPI poll.
+	 * NAPI skips locking if dev->npinfo is NULL.
+	 */
+	synchronize_rcu();
+
 	return 0;
 
 flush:
--
2.50.1