Message-ID: <20250526045004.GL88033@black.fi.intel.com>
Date: Mon, 26 May 2025 07:50:04 +0300
From: Mika Westerberg <mika.westerberg@...ux.intel.com>
To: Ricard Bejarano <ricard@...arano.io>
Cc: netdev@...r.kernel.org, michael.jamet@...el.com, YehezkelShB@...il.com,
andrew+netdev@...n.ch, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com
Subject: Re: Poor thunderbolt-net interface performance when bridged
Hi,
On Fri, May 23, 2025 at 05:07:02PM +0200, Ricard Bejarano wrote:
> > What is the performance without bridging?
>
> I actually tested this as soon as I sent my original message. Interestingly
> enough, performance without bridging is about the same: ~930Mbps in the
> purple->red direction, ~5Mbps in red->purple.
>
> I also tested running eBPF/XDP programs attached to both eno1 and tb0 to
> immediately XDP_REDIRECT to each other. This worked, as confirmed by successful
> ping/iperf even after bringing br0 down, and I could see the XDP program
> invocation counts growing in 'bpftool prog list'.
> But all I got was maybe a ~1Mbps average increase in throughput in the
> red->purple direction, which IMO falls within the measurement error margin.
> But I guess we've now isolated the problem out of the bridge completely, right?
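For reference, attaching and inspecting such a redirect pair typically looks
roughly like this (the object file name xdp_redirect.o and section name xdp
are placeholders for whatever you actually built):

# ip link set dev eno1 xdp obj xdp_redirect.o sec xdp
# ip link set dev tb0 xdp obj xdp_redirect.o sec xdp
# bpftool prog list
# ip link set dev eno1 xdp off
# ip link set dev tb0 xdp off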
>
> As instructed, I've attached the full 'dmesg' output after setting the
> 'thunderbolt.dyndbg=+p' kernel command line flag.
Thanks for the logs. See my analysis below.
> [ 4.144711] thunderbolt 0000:04:00.0: using firmware connection manager
This means the tunnels are controlled by the firmware, not the kernel
driver, i.e. this is an older, non-USB4 system. The firmware connection
manager does not support lane bonding, which means your link can only use
a single lane at 20 Gb/s. However, there is even more to this:
> [ 5.497037] thunderbolt 0-1: current link speed 10.0 Gb/s
> [ 5.497049] thunderbolt 0-1: current link width symmetric, single lane
This shows that the link was trained only to Gen 2, so instead of 20 Gb/s
you get only 10 Gb/s. Since this is handled by the firmware, the driver
only logs these values, but I suggest double-checking by running:
# tblist -Av
You can get tbtools here [1].
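If you want to cross-check without tbtools, recent kernels also expose the
negotiated speed and lane count in sysfs (the 0-1 device name is taken from
your log above; I'm assuming your kernel is new enough to have these
attributes):

# cat /sys/bus/thunderbolt/devices/0-1/tx_speed
# cat /sys/bus/thunderbolt/devices/0-1/tx_lanes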
The typical reason for this is a bad cable. The ones with the small
lightning logo should work best; if you use something else, the link may
get degraded. You can check the negotiated link speed by running the
command above. I think this explains the "low" throughput you are seeing.
Hope this helps.
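Once the link trains at the full 20 Gb/s, it is worth re-running the plain
iperf3 test in both directions to confirm (<red-host> is of course a
placeholder for the address you have been testing against):

# iperf3 -c <red-host>
# iperf3 -c <red-host> -R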
[1] https://github.com/intel/tbtools/wiki/Useful-Commands#list-all-devices-including-other-hosts-and-retimers