Message-ID: <bcff860e-749b-4911-9eba-41b47c00c305@arista.com>
Date: Thu, 23 Oct 2025 15:52:28 -0700
From: Christoph Schwarz <cschwarz@...sta.com>
To: Neal Cardwell <ncardwell@...gle.com>
Cc: edumazet@...gle.com, netdev@...r.kernel.org
Subject: Re: TCP sender stuck despite receiving ACKs from the peer

On 10/3/25 18:24, Neal Cardwell wrote:
[...]
> Thanks for the report!
> 
> A few thoughts:
> 
[...]
> 
> (2) After that, would it be possible to try this test with a newer
> kernel? You mentioned this is with kernel version 5.10.165, but that's
> more than 2.5 years old at this point, and it's possible the bug has
> been fixed since then.  Could you please try this test with the newest
> kernel that is available in your distribution? (If you are forced to
> use 5.10.x on your distribution, note that even with 5.10.x there is
> v5.10.245, which was released yesterday.)
> 
> (3) If this bug is still reproducible with a recent kernel, would it
> be possible to gather .pcap traces from both client and server,
> including SYN and SYN/ACK? Sometimes it can be helpful to see the
> perspective of both ends, especially if there are middleboxes
> manipulating the packets in some way.
> 
> Thanks!
> 
> Best regards,
> neal

Hi,

I want to give an update, since we have made some progress.

We tried with the 6.12.40 kernel, but the problem was much harder to 
reproduce there, and we never managed to capture packets during a 
successful reproduction. So we went back to 5.10.165, added more tracing 
and eventually figured out how the TCP connection got into the bad state.

This is a backtrace from the TCP stack calling down to the device driver:
  => fdev_tx    // ndo_start_xmit hook of a proprietary device driver
  => dev_hard_start_xmit
  => sch_direct_xmit
  => __qdisc_run
  => __dev_queue_xmit
  => vlan_dev_hard_start_xmit
  => dev_hard_start_xmit
  => __dev_queue_xmit
  => ip_finish_output2
  => __ip_queue_xmit
  => __tcp_transmit_skb
  => tcp_write_xmit

tcp_write_xmit sends segments of 65160 bytes. Due to an MSS of 1448, 
they get broken down into 45 packets of 1448 bytes each. These 45 
packets eventually reach dev_hard_start_xmit, which is a simple loop 
forwarding packets one by one. When the problem occurs, we see that 
dev_hard_start_xmit transmits the initial N packets successfully, but 
the remaining 45-N ones fail with error code 1. The loop runs to 
completion and does not break.
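
For reference, this is roughly what that loop looks like in net/core/dev.c 
on 5.10 (simplified and trimmed here for illustration, not the verbatim 
source):

struct sk_buff *dev_hard_start_xmit(struct sk_buff *first,
				    struct net_device *dev,
				    struct netdev_queue *txq, int *ret)
{
	struct sk_buff *skb = first;
	int rc = NETDEV_TX_OK;

	while (skb) {
		struct sk_buff *next = skb->next;

		skb_mark_not_on_list(skb);
		rc = xmit_one(skb, dev, txq, next != NULL);

		/* Only a return code that dev_xmit_complete() treats as
		 * incomplete breaks out early; in our case the loop keeps
		 * going and only the rc of the last packet is reported.
		 */
		if (unlikely(!dev_xmit_complete(rc))) {
			skb->next = next;
			goto out;
		}
		skb = next;
	}
out:
	/* Whatever the last iteration returned is what the caller sees. */
	*ret = rc;
	return skb;
}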

The error code 1 from dev_hard_start_xmit gets returned through the call 
stack up to tcp_write_xmit, which treats this as an error and breaks its 
own loop without advancing snd_nxt:

		if (unlikely(tcp_transmit_skb(sk, skb, 1, gfp)))
			break; // <<< breaks here

repair:
		/* Advance the send_head.  This one is sent out.
		 * This call will increment packets_out.
		 */
		tcp_event_new_data_sent(sk, skb);

From packet captures we can prove that all 45 packets show up on the 
kernel device on the sender. In addition, the first N of those 45 
packets show up on the kernel device on the peer. The connection is now 
in the problem state where the peer is N packets ahead of the sender and 
the sender thinks that it never sent those packets, leading to the problem 
described in my initial mail.

Furthermore, we noticed that the 45-N missing packets show up as drops 
on the sender's kernel device:

vlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 127.2.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
         [...]
         TX errors 0  dropped 36 overruns 0  carrier 0  collisions 0

This device is a vlan device stacked on another device like this:

49: vlan0@parent: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
     link/ether 02:1c:a7:00:00:01 brd ff:ff:ff:ff:ff:ff
3: parent: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 10000 qdisc prio state UNKNOWN mode DEFAULT group default qlen 1000
     link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

Eventually packets need to go through the device driver, which has only 
a limited number of TX buffers. The driver implements flow control: when 
it is about to exhaust its buffers, it stops TX by calling 
netif_stop_queue. Once buffers become available again, it resumes 
TX by calling netif_wake_queue. From packet counters we can tell that 
this is happening frequently.
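
The driver itself is proprietary, but its flow control follows the usual 
pattern, roughly like the sketch below (the fdev_* helpers and the 
buffer-accounting details are placeholders, not the actual driver code):

static netdev_tx_t fdev_tx(struct sk_buff *skb, struct net_device *dev)
{
	struct fdev_priv *priv = netdev_priv(dev);

	if (!fdev_tx_buffers_available(priv)) {
		/* No room at all: tell the stack to back off. */
		netif_stop_queue(dev);
		return NETDEV_TX_BUSY;
	}

	fdev_post_tx_buffer(priv, skb);

	/* Stop TX preemptively when the next packet would not fit. */
	if (!fdev_tx_buffers_available(priv))
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}

/* TX completion (interrupt) path: buffers were reclaimed, resume TX. */
static void fdev_tx_done(struct net_device *dev)
{
	struct fdev_priv *priv = netdev_priv(dev);

	if (netif_queue_stopped(dev) && fdev_tx_buffers_available(priv))
		netif_wake_queue(dev);
}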

At this point we suspected "qdisc noqueue" to be a factor, and indeed, 
after adding a queue to vlan0 the problem no longer occurred, although we 
still see TX drops on the vlan0 device.

Missing queue or not, we think there is a disconnect between the device 
driver API and the TCP stack. The device driver API only allows 
transmitting packets one by one (ndo_start_xmit). The TCP stack operates 
on larger segments that it breaks down into smaller pieces 
(tcp_write_xmit / __tcp_transmit_skb). This can lead to a classic "short 
write" condition which the network stack doesn't seem to handle well in 
all cases.
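
To illustrate the analogy: ordinary userspace code handles a short write by 
remembering how far it got and resuming from there, instead of treating the 
whole buffer as unsent (plain illustration, nothing kernel-specific):

#include <errno.h>
#include <unistd.h>

static ssize_t write_all(int fd, const char *buf, size_t len)
{
	size_t done = 0;

	while (done < len) {
		ssize_t n = write(fd, buf + done, len - done);

		if (n < 0) {
			if (errno == EINTR)
				continue;
			/* Report the partial progress we did make. */
			return done ? (ssize_t)done : -1;
		}
		done += n;	/* advance only by what was actually written */
	}
	return done;
}

In our case the equivalent of "done" (snd_nxt / packets_out) is never 
advanced for the packets that did leave the box.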

We'd appreciate your comments,
Chris

