Message-Id: <353118D9-E9FF-4718-A33A-54155C170693@bejarano.io>
Date: Fri, 23 May 2025 17:07:02 +0200
From: Ricard Bejarano <ricard@...arano.io>
To: Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc: netdev@...r.kernel.org,
michael.jamet@...el.com,
YehezkelShB@...il.com,
andrew+netdev@...n.ch,
davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com
Subject: Re: Poor thunderbolt-net interface performance when bridged
Hey, thank you very much for your answer.
Before responding: I've attached the 'perf top' outputs for blue's CPU #2 (which
I've pinned to handle the interrupts of both eno1 and tb0). As far as I can
tell, they don't point to anything conclusive.
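In case it matters, the pinning itself is just a write of the CPU #2 bitmask
into /proc/irq/<N>/smp_affinity for each of the two IRQs; a tiny C sketch of
that write (IRQ number passed as an argument, not the actual numbers on blue):

/* Pin one IRQ to CPU #2 by writing the CPU bitmask (0b100 -> "4")
 * into /proc/irq/<irq>/smp_affinity. The IRQ numbers themselves come
 * from /proc/interrupts for eno1 and tb0. */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <irq>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}

	fputs("4\n", f);	/* CPU #2 -> mask 0x4 */
	fclose(f);
	return 0;
}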
> What is the performance without bridging?
I actually tested this as soon as I sent my original message. Interestingly
enough, performance without bridging is about the same: ~930Mbps in the
purple->red direction, ~5Mbps in red->purple.
I also tested running eBPF/XDP programs attached to both eno1 and tb0 to
immediately XDP_REDIRECT to each other. This worked, as confirmed by successful
ping/iperf even after bringing br0 down, and I could see the XDP program
invocation counts growing in 'bpftool prog list'.
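For reference, each program was essentially of this shape (a minimal sketch,
not the exact code I ran; PEER_IFINDEX and the program name are illustrative,
and in practice each instance gets the other interface's ifindex baked in):

/* Minimal XDP sketch: every frame received on this interface is
 * immediately redirected to the peer interface. PEER_IFINDEX is a
 * placeholder; eno1's copy points at tb0 and vice versa. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define PEER_IFINDEX 4	/* hypothetical ifindex of the peer interface */

SEC("xdp")
int xdp_mirror(struct xdp_md *ctx)
{
	/* bpf_redirect() queues the frame for TX on the peer interface
	 * and returns XDP_REDIRECT on success. */
	return bpf_redirect(PEER_IFINDEX, 0);
}

char _license[] SEC("license") = "GPL";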
But all I got was maybe a ~1Mbps average increase in throughput in the
red->purple direction, which IMO falls within the measurement error margin.
I guess we've now ruled the bridge out as the culprit completely, right?
As instructed, I've attached the full 'dmesg' output after setting the
'thunderbolt.dyndbg=+p' kernel command line flag.
Happy to provide whatever else you need.
Thanks again,
Ricard Bejarano
View attachment "perf-top-idle.txt" of type "text/plain" (2802 bytes)
View attachment "perf-top-fast.txt" of type "text/plain" (2744 bytes)
View attachment "perf-top-slow.txt" of type "text/plain" (2768 bytes)
View attachment "dmesg.txt" of type "text/plain" (88401 bytes)