Message-ID: <11b93504-c0a1-4a2a-9061-034e92f84bb4@kernel.org>
Date: Wed, 5 Nov 2025 16:54:45 +0100
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
Toke Høiland-Jørgensen <toke@...e.dk>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
ihor.solodrai@...ux.dev, "Michael S. Tsirkin" <mst@...hat.com>,
makita.toshiaki@....ntt.co.jp, toshiaki.makita1@...il.com,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kernel-team@...udflare.com
Subject: Re: [PATCH net V2 2/2] veth: more robust handing of race to avoid txq
getting stuck
On 30/10/2025 13.28, Paolo Abeni wrote:
> On 10/27/25 9:05 PM, Jesper Dangaard Brouer wrote:
>> (3) Finally, the NAPI completion check in veth_poll() is updated. If NAPI is
>> about to complete (napi_complete_done), it now also checks if the peer TXQ
>> is stopped. If the ring is empty but the peer TXQ is stopped, NAPI will
>> reschedule itself. This prevents a new race where the producer stops the
>> queue just as the consumer is finishing its poll, ensuring the wakeup is not
>> missed.
>
> [...]
>
>> @@ -986,7 +979,8 @@ static int veth_poll(struct napi_struct *napi, int budget)
>> if (done < budget && napi_complete_done(napi, done)) {
>> /* Write rx_notify_masked before reading ptr_ring */
>> smp_store_mb(rq->rx_notify_masked, false);
>> - if (unlikely(!__ptr_ring_empty(&rq->xdp_ring))) {
>> + if (unlikely(!__ptr_ring_empty(&rq->xdp_ring) ||
>> + (peer_txq && netif_tx_queue_stopped(peer_txq)))) {
>> if (napi_schedule_prep(&rq->xdp_napi)) {
>> WRITE_ONCE(rq->rx_notify_masked, true);
>> __napi_schedule(&rq->xdp_napi);
>
> Double checking that I'm reading the code correctly. The above is
> supposed to trigger when something like the following happens:
>
> [producer]                        [consumer]
>                                    veth_poll()
>                                      [ring empty]
> veth_xmit
>   veth_forward_skb
>     [NETDEV_TX_BUSY]
>                                      napi_complete_done()
>
> netif_tx_stop_queue
> __veth_xdp_flush()
>   rq->rx_notify_masked == true
>                                      WRITE_ONCE(rq->rx_notify_masked,
>                                                 false);
>
> ?
>
> I think the above can't happen: the producer would need to fill the
> whole ring in between the ring check and napi_complete_done().
The race I can see is slightly different. It centers on the consumer
managing to empty the ring after [NETDEV_TX_BUSY].
We have 256 packets in the queue, and I observe a NAPI packet
processing time of 7.64 usec per packet on a given bare-metal ARM64
machine. This means it takes 256 * 7.64 = 1956 usec, or about 1.96 ms,
to empty the queue, which is the window during which the race below
can occur (at the "(something interrupts)" step).
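As a back-of-the-envelope check, the drain window works out like this
(a standalone plain-C calculation; the ring size and per-packet cost
are just the measured figures quoted above):

#include <stdio.h>

int main(void)
{
	/* Figures from the measurement above: 256-slot ptr_ring and
	 * ~7.64 usec NAPI processing cost per packet on that ARM64 host.
	 */
	const unsigned int ring_size = 256;
	const double usec_per_pkt = 7.64;
	const double drain_usec = ring_size * usec_per_pkt;

	printf("drain window: %.0f usec (%.2f ms)\n",
	       drain_usec, drain_usec / 1000.0);
	return 0;
}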
The race itself would look like this:
[producer]                        [consumer]
                                   veth_poll() - already running
veth_xmit
  veth_forward_skb
    [ring full]
    [NETDEV_TX_BUSY]
(something interrupts)
                                   veth_poll()
                                     manage to [empty ring]
                                     napi_complete_done()
netif_tx_stop_queue
__veth_xdp_flush()
 - No effect of flush as:
 - rq->rx_notify_masked == true
                                   WRITE_ONCE(rq->rx_notify_masked, false)
                                   [empty ring] don't restart NAPI
                                   Observe netif_tx_queue_stopped == true
Notice: at the end, the consumer does observe that
netif_tx_queue_stopped is true. The patch leverages this by moving the
netif_tx_queue_stopped check to the end of veth_poll(), so that it
happens after rx_notify_masked has been set to false; this ordering is
the race fix.
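To make the interleaving concrete, below is a single-threaded
userspace walk-through of the timeline above (just a sketch with
made-up variable names, not kernel code), executing the steps in the
order shown and then applying the patched check:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	bool rx_notify_masked = true;  /* NAPI already running */
	bool txq_stopped = false;
	bool napi_scheduled = false;
	int ring_items = 256;          /* [ring full] */

	/* consumer: veth_poll() manages to [empty ring] */
	ring_items = 0;

	/* producer resumes after [NETDEV_TX_BUSY]: stop queue, flush */
	txq_stopped = true;            /* netif_tx_stop_queue() */
	if (!rx_notify_masked)         /* __veth_xdp_flush(): mask is */
		napi_scheduled = true; /* still set, so no wakeup */

	/* consumer: napi_complete_done() path, pre-patch check */
	rx_notify_masked = false;      /* smp_store_mb(..., false) */
	if (ring_items != 0)           /* ring is empty, so NAPI is */
		napi_scheduled = true; /* not restarted */

	printf("old check: txq_stopped=%d napi_scheduled=%d (txq stuck)\n",
	       txq_stopped, napi_scheduled);

	/* with the patch, the consumer also looks at the peer txq */
	if (ring_items != 0 || txq_stopped)
		napi_scheduled = true;

	printf("new check: napi_scheduled=%d (txq will be woken)\n",
	       napi_scheduled);
	return 0;
}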
Other cases, where veth_poll() stops NAPI and exits, are recovered by
__veth_xdp_flush() on the producer side.
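For reference, the producer-side wakeup that recovers those cases
looks roughly like this (based on drivers/net/veth.c; simplified from
memory, details may differ between kernel versions):

static void __veth_xdp_flush(struct veth_rq *rq)
{
	/* Write ptr_ring before reading rx_notify_masked */
	smp_mb();
	if (!READ_ONCE(rq->rx_notify_masked) &&
	    napi_schedule_prep(&rq->xdp_napi)) {
		WRITE_ONCE(rq->rx_notify_masked, true);
		__napi_schedule(&rq->xdp_napi);
	}
}

The smp_mb() here pairs with the smp_store_mb() in veth_poll(): either
the producer sees rx_notify_masked == false and schedules NAPI itself,
or the consumer sees the newly queued entries (and, with this patch,
the stopped peer txq) and reschedules.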
--Jesper