Date:   Mon, 17 May 2021 05:47:55 -0700
From:   Dave Taht <dave.taht@...il.com>
To:     Joakim Zhang <qiangqing.zhang@....com>
Cc:     Frieder Schrempf <frieder.schrempf@...tron.de>,
        dl-linux-imx <linux-imx@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations

On Mon, May 17, 2021 at 3:25 AM Joakim Zhang <qiangqing.zhang@....com> wrote:
>
>
> Hi Frieder,
>
> > -----Original Message-----
> > From: Frieder Schrempf <frieder.schrempf@...tron.de>
> > Sent: May 17, 2021 15:17
> > To: Joakim Zhang <qiangqing.zhang@....com>; dl-linux-imx
> > <linux-imx@....com>; netdev@...r.kernel.org;
> > linux-arm-kernel@...ts.infradead.org
> > Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >
> > Hi Joakim,
> >
> > On 13.05.21 14:36, Joakim Zhang wrote:
> > >
> > > Hi Frieder,
> > >
> > > For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can reproduce
> > > this on L5.10, but can't reproduce it on L5.4.
> > > According to your description, you can reproduce this issue on both L5.4
> > > and L5.10? I need to confirm this with you.
> >
> > Thanks for looking into this. I could reproduce this on 5.4 and 5.10 but both
> > kernels were official mainline kernels and **not** from the linux-imx
> > downstream tree.
> Ok.
>
> > Maybe there is some problem in the mainline tree and it got included in the
> > NXP release kernel starting from L5.10?
> No, this looks much like a known issue; it should have existed ever since AVB support was added in mainline.
>
> Per my understanding the ENET IP does not implement _real_ multiple queues:
> queue 0 is for best effort, and queues 1 and 2 are for AVB streams, whose
> default bandwidth fraction in the driver is 0.5 (i.e. 50 Mbps on a 100 Mbps
> link and 500 Mbps on a 1 Gbps link). When transmitting packets, the net core
> selects queues randomly, which causes the TX bandwidth fluctuations. So you
> can switch to a single queue if you care more about TX bandwidth, or you can
> refer to the NXP internal implementation.
> e.g.
> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> @@ -916,8 +916,8 @@
>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
>                                 clock-names = "ipg", "ahb", "ptp",
>                                               "enet_clk_ref", "enet_out";
> -                               fsl,num-tx-queues = <3>;
> -                               fsl,num-rx-queues = <3>;
> +                               fsl,num-tx-queues = <1>;
> +                               fsl,num-rx-queues = <1>;
>                                 status = "disabled";
>                         };
>                 };
>
> I hope this can help you :)

Patching out the queues is probably not the right thing.

For starters: is there BQL support in this driver? It would be
helpful to have it on all queues.
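
For reference, BQL only needs byte accounting in the xmit and completion
paths. A minimal generic sketch (not the actual fec driver code; the
my_xmit/my_tx_complete names are made up, only the netdev_tx_*_queue()
helpers are the real API):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
{
	u16 qidx = skb_get_queue_mapping(skb);
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);

	/* ... post the skb to the hardware ring for queue qidx ... */

	/* account the queued bytes so BQL can limit in-flight data */
	netdev_tx_sent_queue(txq, skb->len);
	return NETDEV_TX_OK;
}

static void my_tx_complete(struct net_device *dev, u16 qidx,
			   unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);

	/* report completed work so BQL can shrink/grow its limit */
	netdev_tx_completed_queue(txq, pkts, bytes);
}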

Also, if there were a way to present it as two interfaces rather than
one, that would allow a dedicated AVB device to be
exposed.

Or:

Is there a standard means of signalling down the stack via the IP
layer (a DSCP? a setsockopt?) that the AVB queue is requested?
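
There's no AVB-specific socket API in mainline that I'm aware of, but as a
rough userspace sketch (example values only; mapping skb->priority or the
DSCP onto the ENET AVB queues would still need a queue-selection policy
such as mqprio on top), an application could mark its traffic like this:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int prio = 3;     /* becomes skb->priority for this socket's packets */
	int tos = 0xb8;   /* DSCP EF (46) << 2, carried in the IP header */

	if (fd < 0)
		return 1;
	if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
		perror("SO_PRIORITY");
	if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
		perror("IP_TOS");

	/* ... send the latency-sensitive traffic on fd ... */
	return 0;
}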



> Best Regards,
> Joakim Zhang
> > Best regards
> > Frieder
> >
> > >
> > > Best Regards,
> > > Joakim Zhang
> > >
> > >> -----Original Message-----
> > >> From: Joakim Zhang <qiangqing.zhang@....com>
> > >> Sent: May 12, 2021 19:59
> > >> To: Frieder Schrempf <frieder.schrempf@...tron.de>; dl-linux-imx
> > >> <linux-imx@....com>; netdev@...r.kernel.org;
> > >> linux-arm-kernel@...ts.infradead.org
> > >> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > >>
> > >>
> > >> Hi Frieder,
> > >>
> > >> Sorry, I missed this mail before. I can reproduce this issue on my
> > >> side, and I will try my best to look into it.
> > >>
> > >> Best Regards,
> > >> Joakim Zhang
> > >>
> > >>> -----Original Message-----
> > >>> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> > >>> Sent: May 6, 2021 22:46
> > >>> To: dl-linux-imx <linux-imx@....com>; netdev@...r.kernel.org;
> > >>> linux-arm-kernel@...ts.infradead.org
> > >>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > >>>
> > >>> Hi,
> > >>>
> > >>> we observed a weird phenomenon with the Ethernet on our
> > >>> i.MX8M-Mini boards. It happens quite often that the measured
> > >>> bandwidth in the TX direction drops from its expected/nominal value to
> > >>> something like 50% (for 100M connections) or ~67% (for 1G connections).
> > >>>
> > >>> So far we reproduced this with two different hardware designs using
> > >>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
> > >>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> > >>>
> > >>> To measure the throughput we simply run iperf3 on the target (with a
> > >>> short p2p connection to the host PC) like this:
> > >>>
> > >>>   iperf3 -c 192.168.1.10 --bidir
> > >>>
> > >>> But even something simpler like this can be used to get the info
> > >>> (with 'nc -l -p 1122 > /dev/null' running on the host):
> > >>>
> > >>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> > >>>
> > >>> The results fluctuate between each test run and are sometimes 'good'
> > >>> (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s
> > >>> for 100M link).
> > >>> There is nothing else running on the system in parallel. Some more
> > >>> info is also available in this post: [1].
> > >>>
> > >>> If there's anyone around who has an idea on what might be the reason
> > >>> for this, please let me know!
> > >>> Or maybe someone would be willing to do a quick test on his own
> > >>> hardware. That would also be highly appreciated!
> > >>>
> > >>> Thanks and best regards
> > >>> Frieder
> > >>>
> > >>> [1]:
> > >>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC
