Message-ID: <DB8PR04MB6795A2B708CED18F77EEE0A7E62D9@DB8PR04MB6795.eurprd04.prod.outlook.com>
Date: Mon, 17 May 2021 10:22:15 +0000
From: Joakim Zhang <qiangqing.zhang@....com>
To: Frieder Schrempf <frieder.schrempf@...tron.de>,
dl-linux-imx <linux-imx@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
Hi Frieder,
> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> Sent: 17 May 2021 15:17
> To: Joakim Zhang <qiangqing.zhang@....com>; dl-linux-imx
> <linux-imx@....com>; netdev@...r.kernel.org;
> linux-arm-kernel@...ts.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>
> Hi Joakim,
>
> On 13.05.21 14:36, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> > For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can reproduce the
> > issue on L5.10, but I can't reproduce it on L5.4.
> > According to your description, you can reproduce this issue on both L5.4
> > and L5.10? I need to confirm this with you.
>
> Thanks for looking into this. I could reproduce this on 5.4 and 5.10, but
> both kernels were official mainline kernels and **not** from the linux-imx
> downstream tree.
Ok.
> Maybe there is some problem in the mainline tree and it got included in the
> NXP release kernel starting from L5.10?
No, this looks like a known issue; it should have existed ever since AVB support was added in mainline.
The ENET IP does not implement _real_ multiple queues, per my understanding: queue 0 is for best effort, while queues 1 & 2 are for AVB streams, whose default bandwidth fraction in the driver is 0.5 (i.e. 50 Mbps on a 100 Mbps link and 500 Mbps on a 1 Gbps link). When transmitting packets, the net core selects the queue randomly, which causes the TX bandwidth fluctuations. So you can switch to a single queue if you care more about TX bandwidth, or you can refer to NXP's internal implementation.
e.g.
--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
@@ -916,8 +916,8 @@
 				      <&clk IMX8MQ_CLK_ENET_PHY_REF>;
 			clock-names = "ipg", "ahb", "ptp",
 				      "enet_clk_ref", "enet_out";
-			fsl,num-tx-queues = <3>;
-			fsl,num-rx-queues = <3>;
+			fsl,num-tx-queues = <1>;
+			fsl,num-rx-queues = <1>;
 			status = "disabled";
 		};
 	};
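
If you want to double-check the effect, you can verify how many queues the driver actually registered and see where the TX traffic lands (a quick sketch, assuming the interface is eth0; adjust the name for your board):

# list the queues the kernel created for the interface
ls /sys/class/net/eth0/queues/
# with three queues: rx-0 rx-1 rx-2 tx-0 tx-1 tx-2
# after the change above: rx-0 tx-0

# with the default mq root qdisc each TX queue gets its own child
# qdisc, so the per-child byte counters show how much traffic ended
# up on queues 1 and 2 during an iperf3 run
tc -s qdisc show dev eth0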
I hope this can help you :)
Best Regards,
Joakim Zhang
> Best regards
> Frieder
>
> >
> > Best Regards,
> > Joakim Zhang
> >
> >> -----Original Message-----
> >> From: Joakim Zhang <qiangqing.zhang@....com>
> >> Sent: 12 May 2021 19:59
> >> To: Frieder Schrempf <frieder.schrempf@...tron.de>; dl-linux-imx
> >> <linux-imx@....com>; netdev@...r.kernel.org;
> >> linux-arm-kernel@...ts.infradead.org
> >> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >>
> >> Hi Frieder,
> >>
> >> Sorry, I missed this mail before. I can reproduce this issue on my
> >> side, and I will try my best to look into it.
> >>
> >> Best Regards,
> >> Joakim Zhang
> >>
> >>> -----Original Message-----
> >>> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> >>> Sent: 6 May 2021 22:46
> >>> To: dl-linux-imx <linux-imx@....com>; netdev@...r.kernel.org;
> >>> linux-arm-kernel@...ts.infradead.org
> >>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>
> >>> Hi,
> >>>
> >>> we observed a weird phenomenon with the Ethernet on our
> >>> i.MX8M-Mini boards. It happens quite often that the measured
> >>> bandwidth in TX direction drops from its expected/nominal value to
> >>> something like 50% (for 100M connections) or ~67% (for 1G connections).
> >>>
> >>> So far we reproduced this with two different hardware designs using
> >>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
> >>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >>>
> >>> To measure the throughput we simply run iperf3 on the target (with a
> >>> short p2p connection to the host PC) like this:
> >>>
> >>> iperf3 -c 192.168.1.10 --bidir
> >>>
> >>> But even something more simple like this can be used to get the info
> >>> (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>>
> >>> dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>>
> >>> The results fluctuate between test runs and are sometimes 'good' (e.g.
> >>> ~90 MBit/s for a 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for a
> >>> 100M link).
> >>> There is nothing else running on the system in parallel. Some more
> >>> info is also available in this post: [1].
> >>>
> >>> If there's anyone around who has an idea on what might be the reason
> >>> for this, please let me know!
> >>> Or maybe someone would be willing to do a quick test on his own
> >>> hardware.
> >>> That would also be highly appreciated!
> >>>
> >>> Thanks and best regards
> >>> Frieder
> >>>
> >>> [1]:
> >>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563