Message-ID: <8ec64936-c8fa-1f0e-68bf-2ad1d6e8f5d9@gmx.de>
Date: Sun, 24 Feb 2019 16:00:21 +0100
From: Simon Huelck <simonmail@....de>
To: Jerome Brunet <jbrunet@...libre.com>,
Jose Abreu <jose.abreu@...opsys.com>,
Martin Blumenstingl <martin.blumenstingl@...glemail.com>
Cc: linux-amlogic@...ts.infradead.org, Gpeppe.cavallaro@...com,
alexandre.torgue@...com,
Emiliano Ingrassia <ingrassia@...genesys.com>,
netdev@...r.kernel.org
Subject: Re: stmmac / meson8b-dwmac
Am 21.02.2019 um 18:46 schrieb Jerome Brunet:
> On Thu, 2019-02-21 at 18:27 +0100, Simon Huelck wrote:
>> Hi,
>>
>>
>>
>> this was changed recently, with a patch for the EEE stuff , see here:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v5.0-rc7&id=e35e26b26e955c53e61c154ba26b9bb15da6b858
> Hu, I was not aware this finally went through. Good !
> As explained in the patch and by Jose, the GMAC should be using IRQ_LEVEL.
>
> The realtek PHY has EEE enabled by default. Having this enabled generates a
> lot of (Low Power) Interrupts.
>
> Previously, the GMAC used IRQ_EDGE. Because that is wrong, we would
> eventually miss an IRQ and the interface would just die. Unfortunately, this
> was not easy to find out.
>
> 2 years ago, we just noticed that disabling EEE would make the failure go
> away. Forcing this EEE feature off through DT was merely a workaround.
>
> Now that the real cause of the problem is known, there is no reason to keep
> this hack around.
>
> Whether EEE adds a performance penalty, and why, is another topic.
> As Jose pointed out, you can disable EEE at runtime, using ethtool.
>
> Jerome
>
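As Jerome mentions above, EEE can be checked and disabled at runtime with ethtool instead of forcing it off through DT. A minimal sketch, assuming the interface is named eth0 (substitute your GMAC interface; --set-eee needs root):

```shell
# Show the current EEE state (supported/advertised link modes, LPI status)
ethtool --show-eee eth0

# Disable EEE at runtime; this does not persist across reboots
ethtool --set-eee eth0 eee off
```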
Hi,
I tested the latest patches of next-20190222; there were some stmmac
improvements.

For the case at hand, the performance stayed identical:
C:\Users\Simon\Downloads\iperf3.6_64bit\iperf3.6_64bit>iperf3.exe -c 10.10.11.1 -i1
warning: Ignoring nonsense TCP MSS 0
Connecting to host 10.10.11.1, port 5201
[ 5] local 10.10.11.100 port 50830 connected to 10.10.11.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 78.9 MBytes 661 Mbits/sec
[ 5] 1.00-2.00 sec 79.1 MBytes 664 Mbits/sec
[ 5] 2.00-3.00 sec 79.4 MBytes 666 Mbits/sec
[ 5] 3.00-4.00 sec 34.4 MBytes 288 Mbits/sec
[ 5] 4.00-5.00 sec 16.1 MBytes 135 Mbits/sec
[ 5] 5.00-6.00 sec 15.8 MBytes 132 Mbits/sec
[ 5] 6.00-7.00 sec 14.2 MBytes 120 Mbits/sec
[ 5] 7.00-8.00 sec 15.6 MBytes 131 Mbits/sec
[ 5] 8.00-9.00 sec 14.9 MBytes 125 Mbits/sec
[ 5] 9.00-10.00 sec 15.0 MBytes 126 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 363 MBytes 305 Mbits/sec sender
[ 5] 0.00-10.04 sec 363 MBytes 303 Mbits/sec receiver
The drop is clearly visible once I activated the second stream to generate
full-duplex load. The highest rate also stays a lot below the roughly
930 Mbit/s that I had seen earlier with 4.14.

The parallel stream reached around 450 Mbit/s, which almost sums up to the
660 Mbit/s single-stream rate. This is what I meant when I said that duplex
might be broken.
Connecting to host 10.10.11.100, port 5201
[ 5] local 10.10.11.1 port 38658 connected to 10.10.11.100 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 62.9 MBytes 528 Mbits/sec 0 65.6 KBytes
[ 5] 1.00-2.00 sec 56.9 MBytes 477 Mbits/sec 0 65.6 KBytes
[ 5] 2.00-3.00 sec 55.9 MBytes 469 Mbits/sec 0 65.6 KBytes
[ 5] 3.00-4.00 sec 53.0 MBytes 445 Mbits/sec 0 65.6 KBytes
[ 5] 4.00-5.00 sec 54.3 MBytes 455 Mbits/sec 0 65.6 KBytes
[ 5] 5.00-6.00 sec 54.8 MBytes 460 Mbits/sec 0 65.6 KBytes
[ 5] 6.00-7.00 sec 45.3 MBytes 380 Mbits/sec 0 65.6 KBytes
[ 5] 7.00-8.00 sec 51.2 MBytes 429 Mbits/sec 0 65.6 KBytes
[ 5] 8.00-9.00 sec 56.1 MBytes 470 Mbits/sec 0 65.6 KBytes
[ 5] 9.00-10.00 sec 55.3 MBytes 464 Mbits/sec 0 65.6 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 546 MBytes 458 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 545 MBytes 457 Mbits/sec receiver
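Pulling the per-interval bitrates out of the two runs above makes the pattern easier to see: the first run averages out to the 305 Mbit/s iperf3 reports, and once the drop sets in, the forward and reverse rates together stay far below what full duplex on a gigabit link should allow. A quick sanity check of the arithmetic (numbers copied from the runs above, not a new measurement):

```python
# Per-interval bitrates (Mbit/s) from the two iperf3 runs above
forward = [661, 664, 666, 288, 135, 132, 120, 131, 125, 126]
reverse = [528, 477, 469, 445, 455, 460, 380, 429, 470, 464]

avg_forward = sum(forward) / len(forward)             # matches the 305 Mbit/s summary
steady_forward = sum(forward[4:]) / len(forward[4:])  # forward rate after the drop
avg_reverse = sum(reverse) / len(reverse)             # matches the 458 Mbit/s summary

print(f"forward average:       {avg_forward:.0f} Mbit/s")
print(f"forward after drop:    {steady_forward:.0f} Mbit/s")
print(f"reverse average:       {avg_reverse:.0f} Mbit/s")
print(f"duplex sum after drop: {steady_forward + avg_reverse:.0f} Mbit/s")
```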
Regards,
Simon