Message-Id: <4930C763-C75F-430A-B26C-60451E629B09@bejarano.io>
Date: Mon, 26 May 2025 13:47:58 +0200
From: Ricard Bejarano <ricard@...arano.io>
To: Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc: netdev@...r.kernel.org,
 michael.jamet@...el.com,
 YehezkelShB@...il.com,
 andrew+netdev@...n.ch,
 davem@...emloft.net,
 edumazet@...gle.com,
 kuba@...nel.org,
 pabeni@...hat.com
Subject: Re: Poor thunderbolt-net interface performance when bridged

> Simple peer-to-peer, no routing nothing. Anything else is making things
> hard to debug. Also note that this whole thing is supposed to be used as
> peer-to-peer not some full fledged networking solution.

> Let's forget bridges for now and anything else than this:
>  Host A <- Thunderbolt Cable -> Host B

Right, but that's precisely what I'm digging into: red->blue runs at line speed,
and so does blue->purple. From what I understand about drivers and networking,
it doesn't make sense that the forwarded red->blue->purple path then drops so
dramatically (from ~9Gbps down to 5Mbps), especially when the reverse
purple->blue->red path runs at ~930Mbps (which lines up with the purple->blue
link's speed).
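
For reference, here's a minimal sketch of one way to collect those per-direction
numbers (this assumes iperf3 with JSON output and a server already running on the
far end; the hostnames are just placeholders for my red/blue/purple boxes, not my
exact invocation):

#!/usr/bin/env python3
# Measure throughput to a peer in both directions with iperf3 and report
# the asymmetry. Assumes "iperf3 -s" is already running on the peer.
import json
import subprocess

def throughput_bps(peer, reverse=False):
    cmd = ["iperf3", "-c", peer, "-t", "10", "-J"]
    if reverse:
        cmd.append("-R")  # -R: the peer sends, we receive
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)["end"]["sum_received"]["bits_per_second"]

tx = throughput_bps("purple")                # red -> blue -> purple
rx = throughput_bps("purple", reverse=True)  # purple -> blue -> red
print("red->purple: %.3f Gb/s" % (tx / 1e9))
print("purple->red: %.3f Gb/s" % (rx / 1e9))
print("asymmetry  : %.0fx" % (max(tx, rx) / min(tx, rx)))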

> So instead of 40 Gb/s with lane bonding you get 10 Gb/s (although there are
> some limitations in the DMA side so you don't get the full 40 Gb/s but
> certainly more than what the 10 Gb/s single lane gives you).

Right, but I'm getting 5Mbps, with an M.
That's about 1800x slower than the ~9Gbps I get the other way around with direct
(non-forwarded, non-bridged) traffic. I'm sure I don't have the hardware for
40Gbps, but if I'm getting ~9Gbps one way, why am I not getting similar
performance the other way?
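
(For reference on that factor: 9Gbps = 9000Mbps, and 9000Mbps / 5Mbps = 1800,
hence the ~1800x.)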

It's not the absolute performance that bugs me, but the massive asymmetry
between the two directions of the very same ports and cable.

> You missed the attachment?

Yup, sorry. I've attached both the original tblist output and the reversed one.

> Well, if the link is degraded to 10 Gb/s then I'm not sure there is nothing
> more I can do here.

This I don't understand.

I will see what I can do with the 12th/13th-gen NUCs, which are TB4 and have
certified cables.

Thanks, once again,
Ricard Bejarano


Attachment: "tblist-flipped.txt" (text/plain, 456 bytes)

Attachment: "tblist.txt" (text/plain, 470 bytes)
