Message-ID: <494cd993-aa45-ff11-8d76-f2233fcf7295@kontron.de>
Date: Mon, 17 May 2021 09:17:12 +0200
From: Frieder Schrempf <frieder.schrempf@...tron.de>
To: Joakim Zhang <qiangqing.zhang@....com>,
dl-linux-imx <linux-imx@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations

Hi Joakim,

On 13.05.21 14:36, Joakim Zhang wrote:
>
> Hi Frieder,
>
> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can reproduce it on L5.10, but can't reproduce it on L5.4.
> According to your description, you can reproduce this issue on both L5.4 and L5.10? So I need to confirm this with you.

Thanks for looking into this. I could reproduce this on 5.4 and 5.10, but
both kernels were official mainline kernels and **not** from the linux-imx
downstream tree.

Maybe there is a problem in the mainline tree that got included in the NXP
release kernel starting from L5.10?
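
If this really is a mainline regression between v5.4 and v5.10, a bisect
might narrow it down. A rough sketch (assuming the issue reproduces
reliably enough to mark each step):

git bisect start
git bisect bad v5.10
git bisect good v5.4
# build and boot each kernel the bisect suggests, run the iperf3 test,
# then mark the result with 'git bisect good' or 'git bisect bad'
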
Best regards
Frieder
>
> Best Regards,
> Joakim Zhang
>
>> -----Original Message-----
>> From: Joakim Zhang <qiangqing.zhang@....com>
>> Sent: May 12, 2021 19:59
>> To: Frieder Schrempf <frieder.schrempf@...tron.de>; dl-linux-imx
>> <linux-imx@....com>; netdev@...r.kernel.org;
>> linux-arm-kernel@...ts.infradead.org
>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>>
>> Hi Frieder,
>>
>> Sorry, I missed this mail before. I can reproduce this issue on my side,
>> and I will try my best to look into it.
>>
>> Best Regards,
>> Joakim Zhang
>>
>>> -----Original Message-----
>>> From: Frieder Schrempf <frieder.schrempf@...tron.de>
>>> Sent: May 6, 2021 22:46
>>> To: dl-linux-imx <linux-imx@....com>; netdev@...r.kernel.org;
>>> linux-arm-kernel@...ts.infradead.org
>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>
>>> Hi,
>>>
>>> we observed a weird phenomenon with the Ethernet on our i.MX8M-Mini
>>> boards. It happens quite often that the measured bandwidth in TX
>>> direction drops from its expected/nominal value to something like 50%
>>> (for 100M connections) or ~67% (for 1G connections).
>>>
>>> So far we reproduced this with two different hardware designs using
>>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
>>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>>>
>>> To measure the throughput we simply run iperf3 on the target (with a
>>> short p2p connection to the host PC) like this:
>>>
>>> iperf3 -c 192.168.1.10 --bidir
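>>>
>>> To see how much the results vary, the test can be scripted, e.g. with a
>>> simple loop (an untested sketch; ten runs, keeping only the summary lines):
>>>
>>> for i in $(seq 1 10); do
>>>     iperf3 -c 192.168.1.10 -t 10 --bidir | tail -n 4
>>> done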
>>>
>>> But even something simpler can be used to get this information (with
>>> 'nc -l -p 1122 > /dev/null' running on the host):
>>>
>>> dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
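>>>
>>> A larger transfer may give steadier numbers, and GNU dd prints its own
>>> throughput summary when it finishes (an untested variant of the above):
>>>
>>> dd if=/dev/zero bs=10M count=10 | nc 192.168.1.10 1122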
>>>
>>> The results fluctuate between test runs and are sometimes 'good' (e.g.
>>> ~90 MBit/s for a 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for a
>>> 100M link).
>>> There is nothing else running on the system in parallel. Some more
>>> info is also available in this post: [1].
>>>
>>> If anyone around has an idea of what might be the reason for this,
>>> please let me know!
>>> Or maybe someone would be willing to do a quick test on their own
>>> hardware. That would also be highly appreciated!
>>>
>>> Thanks and best regards
>>> Frieder
>>>
>>> [1]:
>>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563