Message-ID: <aLYAKG2Aw5t7GKtu@shredder>
Date: Mon, 1 Sep 2025 23:20:56 +0300
From: Ido Schimmel <idosch@...sch.org>
To: Ricard Bejarano <ricard@...arano.io>
Cc: Andrew Lunn <andrew@...n.ch>,
	Mika Westerberg <mika.westerberg@...ux.intel.com>,
	netdev@...r.kernel.org, michael.jamet@...el.com,
	YehezkelShB@...il.com, andrew+netdev@...n.ch, davem@...emloft.net,
	edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com
Subject: Re: Poor thunderbolt-net interface performance when bridged

On Thu, Aug 28, 2025 at 09:59:25AM +0200, Ricard Bejarano wrote:
> Anything we could further test?

Disclaimer:
I am not familiar with thunderbolt and tbnet.

tl;dr:
Can you try disabling TSO on your thunderbolt devices and see if it
helps? Like so:
# ethtool -K tb0 tcp-segmentation-offload off
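
To check the current state, and to re-enable it after the test (assuming
tb0 is the interface name on your side as well):

# ethtool -k tb0 | grep tcp-segmentation-offload
# ethtool -K tb0 tcp-segmentation-offload on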

Details:
The driver advertises support for TSO but admits that it's not
implementing it correctly:

"ThunderboltIP takes advantage of TSO packets but instead of segmenting
them we just split the packet into Thunderbolt frames (maximum payload
size of each frame is 4084 bytes) and calculate checksum over the whole
packet here.

The receiving side does the opposite if the host OS supports LRO,
otherwise it needs to split the large packet into MTU sized smaller
packets."

So, what I *think* ends up happening is that the receiver (blue)
receives large TCP packets from tbnet instead of MTU-sized TCP packets.
This might be OK for locally received traffic, but not for forwarded
traffic.
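
One way to confirm this would be to capture on the thunderbolt interface
on blue during the test and check whether you see TCP packets well above
1500 bytes (again assuming tb0 is the interface name there):

# tcpdump -ni tb0 -c 20 'tcp and greater 1500'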

The bridge/router/whatever on blue will try to forward the oversized
packets towards purple and will drop them because they exceed the MTU
(1500) of your Ethernet interface.
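
You can double-check the MTUs on the two interfaces involved; I'm using
eth0 here only as a placeholder for your Ethernet interface name:

# ip link show dev tb0 | grep mtu
# ip link show dev eth0 | grep mtu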

This would explain why you only see the problem with TCP, and only when
forwarding from tbnet to regular Ethernet devices, not in the other
direction.

You can start recording packet drops on blue *before* running the
iperf3 test:

# perf record -a -g -e skb:kfree_skb

And then view the traces with "perf script". If the above theory is
correct (and it might not be), you should see the drops in
br_dev_queue_push_xmit().
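
If the trace turns out to be noisy, you can limit the recording to the
duration of the test and filter the output for the function in question,
e.g. (run the iperf3 test while the sleep is in progress):

# perf record -a -g -e skb:kfree_skb -- sleep 30
# perf script | grep -B 2 -A 10 br_dev_queue_push_xmit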
