Message-ID: <CALjTZva9+ufCR5+QhJXL+7CHDRJVLQqb4uPwumEO5BqssGKPMw@mail.gmail.com>
Date: Sat, 1 Mar 2025 11:45:37 +0000
From: Rui Salvaterra <rsalvaterra@...il.com>
To: Heiner Kallweit <hkallweit1@...il.com>
Cc: nic_swsd@...ltek.com, netdev@...r.kernel.org
Subject: Re: [PATCH] r8169: add support for 16K jumbo frames on RTL8125B
Hi, Heiner,
On Fri, 28 Feb 2025 at 20:22, Heiner Kallweit <hkallweit1@...il.com> wrote:
>
> This has been proposed and discussed before. Decision was to not increase
> the max jumbo packet size, as vendor drivers r8125/r8126 also support max 9k.
I did a cursory search of the mailing list archives, but didn't find
anything specific. Maybe I didn't look hard enough. However…
> And in general it's not clear whether you would gain anything from jumbo packets,
> because hw TSO and c'summing aren't supported for jumbo packets.
… I actually have numbers to justify it. For my use case, jumbo frames
make a *huge* difference. I have an Atom 330-based file server, and its
CPU is too slow to saturate the link with an MTU of 1500 bytes. The
situation changes dramatically when I use jumbo frames. Case in point…
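(For reference, the numbers below are the server-side reports of a plain
iperf3 TCP run: iperf3 -s on the Atom box, with the client doing something
along the lines of iperf3 -c 192.168.17.16 for the default 10 seconds. The
Atom machine is the receiver in all three cases.)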

MTU = 1500 bytes:
Accepted connection from 192.168.17.20, port 55514
[ 5] local 192.168.17.16 port 5201 connected to 192.168.17.20 port 55524
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 241 MBytes 2.02 Gbits/sec
[ 5] 1.00-2.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 2.00-3.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 3.00-4.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 4.00-5.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 5.00-6.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 6.00-7.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 7.00-8.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 8.00-9.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 9.00-10.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 10.00-10.00 sec 128 KBytes 1.27 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 2.36 GBytes 2.03 Gbits/sec receiver

MTU = 9000 bytes:
Accepted connection from 192.168.17.20, port 53474
[ 5] local 192.168.17.16 port 5201 connected to 192.168.17.20 port 53490
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 1.00-2.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 2.00-3.00 sec 294 MBytes 2.47 Gbits/sec
[ 5] 3.00-4.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 4.00-5.00 sec 294 MBytes 2.47 Gbits/sec
[ 5] 5.00-6.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 6.00-7.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 7.00-8.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 8.00-9.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 9.00-10.00 sec 295 MBytes 2.47 Gbits/sec
[ 5] 10.00-10.00 sec 384 KBytes 2.38 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 2.88 GBytes 2.47 Gbits/sec receiver

MTU = 12000 bytes (with my patch):
Accepted connection from 192.168.17.20, port 59378
[ 5] local 192.168.17.16 port 5201 connected to 192.168.17.20 port 59388
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 1.00-2.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 2.00-3.00 sec 295 MBytes 2.48 Gbits/sec
[ 5] 3.00-4.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 4.00-5.00 sec 295 MBytes 2.48 Gbits/sec
[ 5] 5.00-6.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 6.00-7.00 sec 295 MBytes 2.48 Gbits/sec
[ 5] 7.00-8.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 8.00-9.00 sec 296 MBytes 2.48 Gbits/sec
[ 5] 9.00-10.00 sec 294 MBytes 2.47 Gbits/sec
[ 5] 10.00-10.00 sec 512 KBytes 2.49 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 2.89 GBytes 2.48 Gbits/sec receiver

This demonstrates that the bottleneck is per-frame processing. With a
larger frame size, far fewer frames (and hence far fewer checksum
calculations) are needed for the same amount of payload data, and the
CPU is able to keep up.
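As a rough back-of-envelope (assuming plain IPv4 + TCP with no options,
i.e. MSS = MTU - 40 bytes), the numbers above work out to roughly:

  MTU  1500: 2.36 GBytes / 1460 B per segment  ≈ 1.7 M segments (~170 k/s)
  MTU  9000: 2.88 GBytes / 8960 B per segment  ≈ 0.35 M segments (~35 k/s)
  MTU 12000: 2.89 GBytes / 11960 B per segment ≈ 0.26 M segments (~26 k/s)

So for essentially the same payload, the receiver has to process (and
checksum) around five to seven times fewer packets per second.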
Kind regards,
Rui Salvaterra