Message-Id: <69080E6E-C5EF-436B-92F0-610183C5ABC0@bejarano.io>
Date: Tue, 27 May 2025 14:36:11 +0200
From: Ricard Bejarano <ricard@...arano.io>
To: Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc: netdev@...r.kernel.org,
 michael.jamet@...el.com,
 YehezkelShB@...il.com,
 andrew+netdev@...n.ch,
 davem@...emloft.net,
 edumazet@...gle.com,
 kuba@...nel.org,
 pabeni@...hat.com
Subject: Re: Poor thunderbolt-net interface performance when bridged

Thanks for the hint.

I've run a test with end-to-end flow control disabled, just in case:

root@red:~# dmesg | grep Command
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.14.7 root=/dev/mapper/ubuntu--vg-ubuntu--lv ro thunderbolt.dyndbg=+p thunderbolt_net.e2e=0
root@red:~#

root@...e:~# dmesg | grep Command
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.14.7 root=/dev/mapper/ubuntu--vg-ubuntu--lv ro thunderbolt.dyndbg=+p thunderbolt_net.e2e=0
root@...e:~#
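
For anyone reproducing this without editing the kernel command line, the same
setting should also be reachable by reloading the driver (just a sketch; it
assumes the driver is built as a module and nothing else is holding it, and
the tb0 link will bounce):

    modprobe -r thunderbolt_net
    modprobe thunderbolt_net e2e=0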

Here's iperf3:

root@red:~# iperf3 -c 10.0.0.2 -u -b 1100M -t 5  # blue
Connecting to host 10.0.0.2, port 5201
[  5] local 10.0.0.1 port 60610 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   131 MBytes  1.10 Gbits/sec  94896
[  5]   1.00-2.00   sec   131 MBytes  1.10 Gbits/sec  94958
[  5]   2.00-3.00   sec   131 MBytes  1.10 Gbits/sec  94960
[  5]   3.00-4.00   sec   131 MBytes  1.10 Gbits/sec  94958
[  5]   4.00-5.00   sec   131 MBytes  1.10 Gbits/sec  94958
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   656 MBytes  1.10 Gbits/sec  0.000 ms  0/474730 (0%)  sender
[  5]   0.00-5.00   sec   601 MBytes  1.01 Gbits/sec  0.005 ms  39672/474730 (8.4%)  receiver
root@red:~#

And here are the interface stat diffs:

1) red's br0 (10.0.0.1)
    RX:  bytes packets errors dropped  missed   mcast
         +1080     +15      -       -       -       -
    TX:  bytes packets errors dropped carrier collsns
    +707349348 +474748      -       -       -       -

2) red's tb0
    RX:  bytes packets errors dropped  missed   mcast
         +1290     +15      -       -       -       -
    TX:  bytes packets errors dropped carrier collsns
    +707349348 +474748      -       -       -       -

3) blue's tb0
    RX:  bytes packets errors dropped  missed   mcast
    +701384878 +470745  +1088       -       -       -
    TX:  bytes packets errors dropped carrier collsns
         +1290     +15      -       -       -       -

4) blue's br0 (10.0.0.2)
    RX:  bytes packets errors dropped  missed   mcast
    +694794448 +470745      -       -       -       -
    TX:  bytes packets errors dropped carrier collsns
         +1290     +15      -       -       -       -
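
In case it helps to reproduce: the diffs above are just before/after deltas of
the per-interface counters; something like this (sketch) gives the same numbers:

    ip -s -s link show dev tb0 > /tmp/tb0.before
    iperf3 -c 10.0.0.2 -u -b 1100M -t 5
    ip -s -s link show dev tb0 > /tmp/tb0.after
    diff /tmp/tb0.before /tmp/tb0.after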

We lost 4003 packets (474748 sent by red's tb0 vs. 470745 received by blue's tb0)
and see 1088 RX errors on blue's tb0.
I still don't know why iperf3 reports 39672 lost datagrams.
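
The gap between the two numbers (39672 reported lost by iperf3 vs. the 4003
missing at tb0) suggests roughly 35k datagrams are being dropped above the
interface, possibly in blue's UDP receive path rather than on the Thunderbolt
link itself. If that's the case it should show up in blue's UDP socket
counters; a quick check (sketch, standard tooling assumed):

    netstat -su                    # "packet receive errors" / "receive buffer errors"
    grep Udp: /proc/net/snmp       # InErrors / RcvbufErrors columns

If those counters climb during the run, raising net.core.rmem_max/rmem_default
(or passing -w to iperf3) would be worth a try.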

In any case, from rerunning the various tests, disabling end-to-end flow control
doesn't seem to have much of an impact on overall loss or throughput.

Thanks,
Ricard Bejarano

