Message-ID: <87sef4e8m7.fsf@toke.dk>
Date: Mon, 27 Oct 2025 15:12:48 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, netdev@...r.kernel.org,
 makita.toshiaki@....ntt.co.jp
Cc: Eric Dumazet <eric.dumazet@...il.com>, "David S. Miller"
 <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
 <pabeni@...hat.com>, ihor.solodrai@...ux.dev, toshiaki.makita1@...il.com,
 bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
 kernel-team@...udflare.com
Subject: Re: [PATCH net V1 3/3] veth: more robust handling of race to avoid
 txq getting stuck

Jesper Dangaard Brouer <hawk@...nel.org> writes:

> On 24/10/2025 16.33, Toke Høiland-Jørgensen wrote:
>> Jesper Dangaard Brouer <hawk@...nel.org> writes:
>> 
>>> Commit dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to
>>> reduce TX drops") introduced a race condition that can lead to a permanently
>>> stalled TXQ. This was observed in production on ARM64 systems (Ampere Altra
>>> Max).
>>>
>>> The race occurs in veth_xmit(). The producer observes a full ptr_ring and
>>> stops the queue (netif_tx_stop_queue()). The subsequent conditional logic,
>>> intended to re-wake the queue if the consumer had just emptied it (if
>>> (__ptr_ring_empty(...)) netif_tx_wake_queue()), can fail. This leads to a
>>> "lost wakeup" where the TXQ remains stopped (QUEUE_STATE_DRV_XOFF) and
>>> traffic halts.
>>>
>>> This failure is caused by an incorrect use of the __ptr_ring_empty() API
>>> from the producer side. As noted in kernel comments, this check is not
>>> guaranteed to be correct if a consumer is operating on another CPU. The
>>> empty test is based on ptr_ring->consumer_head, making it reliable only for
>>> the consumer. Using this check from the producer side is fundamentally racy.
>>>
>>> This patch fixes the race by adopting the more robust logic from an earlier
>>> version (V4) of this patchset, which always flushed the peer:
>>>
>>> (1) In veth_xmit(), the racy conditional wake-up logic and its memory barrier
>>> are removed. Instead, after stopping the queue, we unconditionally call
>>> __veth_xdp_flush(rq). This guarantees that the NAPI consumer is scheduled,
>>> making it solely responsible for re-waking the TXQ.
>> 
>> This makes sense.
>> 
>>> (2) On the consumer side, the logic for waking the peer TXQ is centralized.
>>> It is moved out of veth_xdp_rcv() (which processes a batch) and placed at
>>> the end of the veth_poll() function. This ensures netif_tx_wake_queue() is
>>> called once per complete NAPI poll cycle.
>> 
>> So is this second point strictly necessary to fix the race, or is it
>> more of an optimisation?
>> 
>
> IMHO it is strictly necessary to fix the race.  The wakeup check
> netif_tx_queue_stopped() in veth_poll() needs to be after the code that
> (potentially) writes rx_notify_masked.
>
> This handles the race where veth_xmit() hasn't called
> netif_tx_stop_queue() yet, but veth_poll() manages to consume all packets
> and stops NAPI.  Then we know that __veth_xdp_flush(rq) in veth_xmit()
> will see rx_notify_masked==false and start NAPI/veth_poll() again, and
> even though there are no packets left to process we still hit the
> netif_tx_queue_stopped() check, which starts the txq and allows
> veth_xmit() to run again.
>
> I'll see if I can improve the description for (2).

Right, okay. Yes, adding this reasoning to the commit message would be
good :)

-Toke

