Message-ID: <20200612115250.GS11869@pengutronix.de>
Date:   Fri, 12 Jun 2020 13:52:50 +0200
From:   Sascha Hauer <s.hauer@...gutronix.de>
To:     Russell King - ARM Linux admin <linux@...linux.org.uk>
Cc:     linux-arm-kernel@...ts.infradead.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        kernel@...gutronix.de
Subject: Re: [PATCH v2] net: mvneta: Fix Serdes configuration for 2.5Gbps
 modes

On Fri, Jun 12, 2020 at 12:30:31PM +0100, Russell King - ARM Linux admin wrote:
> On Fri, Jun 12, 2020 at 12:22:13PM +0100, Russell King - ARM Linux admin wrote:
> > On Fri, Jun 12, 2020 at 11:42:08AM +0100, Russell King - ARM Linux admin wrote:
> > > With the obvious mistakes fixed (the extraneous 'i' and the missing
> > > default case), it still seems to work on the Armada 388 Clearfog Pro
> > > with 2.5G modules.
> > 
> > ... and the other bug fixed - mvneta_comphy_init() needs to be passed
> > the interface mode.
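
For reference, the change being asked for here is roughly of the
following shape -- a sketch against the mainline mvneta driver, not the
exact v2 hunk -- with mvneta_comphy_init() taking the interface mode as
an explicit parameter instead of using the mode cached in struct
mvneta_port, so callers can hand it the mode they are about to switch
to:

	static int mvneta_comphy_init(struct mvneta_port *pp,
				      phy_interface_t interface)
	{
		int ret;

		/* Tell the generic PHY layer which Ethernet sub-mode
		 * (e.g. 2500base-X) the SerDes lane should run in.
		 */
		ret = phy_set_mode_ext(pp->comphy, PHY_MODE_ETHERNET,
				       interface);
		if (ret)
			return ret;

		return phy_power_on(pp->comphy);
	}
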
> 
> Unrelated to the patch, has anyone noticed that mvneta's performance
> seems to have dropped?  I've only just noticed it (which makes 2.5Gbps
> rather pointless).  This is iperf between two clearfogs with a 2.5G
> fibre link:
> 
> root@...arfog21:~# iperf -V -c fe80::250:43ff:fe02:303%eno2
> ------------------------------------------------------------
> Client connecting to fe80::250:43ff:fe02:303%eno2, TCP port 5001
> TCP window size: 43.8 KByte (default)
> ------------------------------------------------------------
> [  3] local fe80::250:43ff:fe21:203 port 48928 connected with fe80::250:43ff:fe02:303 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   553 MBytes   464 Mbits/sec
> 
> I checked with Jon Nettleton, and he confirms my recollection that
> mvneta on Armada 388 used to be able to fill a 2.5Gbps link.
> 
> If Armada 388 can't manage it, then I suspect Armada XP, being an
> earlier SoC revision, will have worse performance.

I only have one Armada XP board here, which has a loopback cable
between two ports. It gives me:

[  3] local 172.16.1.4 port 47002 connected with 172.16.1.0 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.27 GBytes  1.09 Gbits/sec

Still not 2.5Gbps, but at least twice the data rate you get, plus my
board has to handle both ends of the link.

Sascha

-- 
Pengutronix e.K.                           |                             |
Steuerwalder Str. 21                       | http://www.pengutronix.de/  |
31137 Hildesheim, Germany                  | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
