Message-ID: <20210514130018.GC12395@shell.armlinux.org.uk>
Date: Fri, 14 May 2021 14:00:18 +0100
From: "Russell King (Oracle)" <linux@...linux.org.uk>
To: Stefan Chulski <stefanc@...vell.com>
Cc: Marcin Wojtas <mw@...ihalf.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: mvpp2: incorrect max mtu?

Hi all,

While testing out the 10G speeds on my Macchiatobin platforms, the first
thing I noticed was that they only manage about 1Gbps at an MTU of 1500.
As expected, this improves when the MTU is increased - an MTU of 9000
works, and gives a useful performance boost.

Then comes the obvious question: what is the maximum MTU?

#define MVPP2_BM_JUMBO_FRAME_SIZE 10432 /* frame size 9856 */

So, one may assume that 9856 is the maximum. However:
# ip li set dev eth0 mtu 9888
# ip li set dev eth0 mtu 9889
Error: mtu greater than device maximum.

So, the maximum that userspace can set appears to be 9888. If this is
set, then, while running iperf3, we get:
mvpp2 f2000000.ethernet eth0: bad rx status 9202e510 (resource error), size=9888

So clearly this is too large, and we should not be allowing userspace
to set an MTU this large.
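
For what it's worth, here's my back-of-envelope guess at where the 9888
figure comes from - a minimal userspace sketch, assuming a 224 byte skb
headroom and a 320 byte aligned skb_shared_info on this arm64 config,
neither of which I have verified against the running kernel:

#include <stdio.h>

int main(void)
{
	int pool_buf = 10432;	/* MVPP2_BM_JUMBO_FRAME_SIZE */
	int headroom = 224;	/* assumed skb headroom */
	int shinfo = 320;	/* assumed SKB_DATA_ALIGN(sizeof(struct
				 * skb_shared_info)) */

	/* 10432 - 224 - 320 = 9888, matching the maximum that
	 * userspace is allowed to set.
	 */
	printf("derived max_mtu: %d\n", pool_buf - headroom - shinfo);

	/* An MTU of 9888 means an on-wire frame of 9888 + 14
	 * (ethernet header) + 4 (FCS) + 2 (Marvell header) = 9908
	 * bytes, well past the 9856 the pool size comment promises.
	 */
	printf("frame at mtu 9888: %d\n", 9888 + 14 + 4 + 2);
	return 0;
}

If that arithmetic is right, max_mtu is being derived from the buffer
size less the skb overheads, rather than from the largest frame the
hardware can actually receive into one buffer.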

At this point, it seems to be impossible to regain the previous speed of
the interface by lowering the MTU. Here it is with an MTU of 9000:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.37 MBytes  11.5 Mbits/sec   40   17.5 KBytes
[  5]   1.00-2.00   sec  1.25 MBytes  10.5 Mbits/sec   39   8.74 KBytes
[  5]   2.00-3.00   sec  1.13 MBytes  9.45 Mbits/sec   36   17.5 KBytes
[  5]   3.00-4.00   sec  1.13 MBytes  9.45 Mbits/sec   39   8.74 KBytes
[  5]   4.00-5.00   sec  1.13 MBytes  9.45 Mbits/sec   36   17.5 KBytes
[  5]   5.00-6.00   sec  1.28 MBytes  10.7 Mbits/sec   39   8.74 KBytes
[  5]   6.00-7.00   sec  1.13 MBytes  9.45 Mbits/sec   36   17.5 KBytes
[  5]   7.00-8.00   sec  1.25 MBytes  10.5 Mbits/sec   39   8.74 KBytes
[  5]   8.00-9.00   sec  1.13 MBytes  9.45 Mbits/sec   36   17.5 KBytes
[  5]   9.00-10.00  sec  1.13 MBytes  9.45 Mbits/sec   39   8.74 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.9 MBytes  9.99 Mbits/sec  379          sender
[  5]   0.00-10.00  sec  11.7 MBytes  9.80 Mbits/sec               receiver

Whereas before the test, it was:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   729 MBytes  6.11 Gbits/sec
[  5]   1.00-2.00   sec   719 MBytes  6.03 Gbits/sec
[  5]   2.00-3.00   sec   773 MBytes  6.49 Gbits/sec
[  5]   3.00-4.00   sec   769 MBytes  6.45 Gbits/sec
[  5]   4.00-5.00   sec   779 MBytes  6.54 Gbits/sec
[  5]   5.00-6.00   sec   784 MBytes  6.58 Gbits/sec
[  5]   6.00-7.00   sec   777 MBytes  6.52 Gbits/sec
[  5]   7.00-8.00   sec   774 MBytes  6.50 Gbits/sec
[  5]   8.00-9.00   sec   769 MBytes  6.45 Gbits/sec
[  5]   9.00-10.00  sec   774 MBytes  6.49 Gbits/sec
[  5]  10.00-10.00  sec  3.07 MBytes  5.37 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  7.47 GBytes  6.41 Gbits/sec                  receiver

(This is from the server end of iperf3; the others are from the client
end, but the results were very similar.)

So, clearly something bad has happened to the buffer management as a
result of raising the MTU so high.

As the end which has suffered this issue is the mcbin VM host, I'm not
currently in a position to reboot it without causing major disruption
to my network. However, thoughts on this (and whether others can
reproduce it) would be useful.
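
In case it helps frame the discussion, the kind of clamp I have in mind
would look something like the sketch below. MVPP2_MAX_RX_FRAME is a
made-up name for the 9856 byte limit in the pool size comment, and I've
ignored VLAN tags for simplicity, so treat this as an illustration
rather than a patch:

#include <linux/if_ether.h>	/* ETH_HLEN, ETH_FCS_LEN */

#define MVPP2_MH_SIZE		2	/* Marvell header */
#define MVPP2_MAX_RX_FRAME	9856	/* hypothetical: frame limit from
					 * the pool size comment */

/* Largest MTU whose frame still fits in a single RX buffer:
 * 9856 - 2 - 14 - 4 = 9836.
 */
static int mvpp2_max_mtu(void)
{
	return MVPP2_MAX_RX_FRAME - MVPP2_MH_SIZE - ETH_HLEN - ETH_FCS_LEN;
}

/* ...and at probe time, something like dev->max_mtu = mvpp2_max_mtu(); */
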
Thanks.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!