Message-ID: <DB8PR04MB6795BEDCA2995C1E88E2B5B7E62B9@DB8PR04MB6795.eurprd04.prod.outlook.com>
Date:   Wed, 19 May 2021 08:40:49 +0000
From:   Joakim Zhang <qiangqing.zhang@....com>
To:     Frieder Schrempf <frieder.schrempf@...tron.de>,
        Dave Taht <dave.taht@...il.com>
CC:     dl-linux-imx <linux-imx@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations


Hi Frieder,

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> Sent: May 19, 2021 16:10
> To: Joakim Zhang <qiangqing.zhang@....com>; Dave Taht
> <dave.taht@...il.com>
> Cc: dl-linux-imx <linux-imx@....com>; netdev@...r.kernel.org;
> linux-arm-kernel@...ts.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> Hi Joakim,
> 
> On 19.05.21 09:49, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> >> -----Original Message-----
> >> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> >> Sent: May 18, 2021 20:55
> >> To: Joakim Zhang <qiangqing.zhang@....com>; Dave Taht
> >> <dave.taht@...il.com>
> >> Cc: dl-linux-imx <linux-imx@....com>; netdev@...r.kernel.org;
> >> linux-arm-kernel@...ts.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >>
> >>
> >> On 18.05.21 14:35, Joakim Zhang wrote:
> >>>
> >>> Hi Dave,
> >>>
> >>>> -----Original Message-----
> >>>> From: Dave Taht <dave.taht@...il.com>
> >>>> Sent: May 17, 2021 20:48
> >>>> To: Joakim Zhang <qiangqing.zhang@....com>
> >>>> Cc: Frieder Schrempf <frieder.schrempf@...tron.de>; dl-linux-imx
> >>>> <linux-imx@....com>; netdev@...r.kernel.org;
> >>>> linux-arm-kernel@...ts.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >>>> <qiangqing.zhang@....com>
> >>>> wrote:
> >>>>>
> >>>>>
> >>>>> Hi Frieder,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Frieder Schrempf <frieder.schrempf@...tron.de>
> >>>>>> Sent: May 17, 2021 15:17
> >>>>>> To: Joakim Zhang <qiangqing.zhang@....com>; dl-linux-imx
> >>>>>> <linux-imx@....com>; netdev@...r.kernel.org;
> >>>>>> linux-arm-kernel@...ts.infradead.org
> >>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>> Hi Joakim,
> >>>>>>
> >>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>>>
> >>>>>>> Hi Frieder,
> >>>>>>>
> >>>>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
> >>>>>>> reproduce this on L5.10, but not on L5.4.
> >>>>>>> According to your description, you can reproduce this issue on
> >>>>>>> both L5.4 and L5.10? I need to confirm this with you.
> >>>>>>
> >>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>>>> 5.10 but both kernels were official mainline kernels and **not**
> >>>>>> from the linux-imx downstream tree.
> >>>>> Ok.
> >>>>>
> >>>>>> Maybe there is some problem in the mainline tree and it got
> >>>>>> included in the NXP release kernel starting from L5.10?
> >>>>> No, this looks much like a known issue; it should have existed
> >>>>> ever since AVB support was added in mainline.
> >>>>>
> >>>>> The ENET IP does not implement real multiple queues, per my
> >>>>> understanding: queue 0 is for best effort, and queues 1 and 2 are
> >>>>> for AVB streams, whose default bandwidth fraction is 0.5 in the
> >>>>> driver (i.e. 50Mbps on a 100Mbps link and 500Mbps on a 1Gbps
> >>>>> link). When transmitting packets, the net core selects queues
> >>>>> randomly, which causes the TX bandwidth fluctuations. So you can
> >>>>> switch to a single queue if you care more about TX bandwidth, or
> >>>>> you can refer to the NXP internal implementation.
> >>>>> e.g.
> >>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>> @@ -916,8 +916,8 @@
> >>>>>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
> >>>>>                                 clock-names = "ipg", "ahb", "ptp",
> >>>>>                                               "enet_clk_ref", "enet_out";
> >>>>> -                               fsl,num-tx-queues = <3>;
> >>>>> -                               fsl,num-rx-queues = <3>;
> >>>>> +                               fsl,num-tx-queues = <1>;
> >>>>> +                               fsl,num-rx-queues = <1>;
> >>>>>                                 status = "disabled";
> >>>>>                         };
> >>>>>                 };
> >>>>>
> >>>>> I hope this can help you :)
> >>>>
> >>>> Patching out the queues is probably not the right thing.
> >>>>
> >>>> for starters... Is there BQL support in this driver? It would be
> >>>> helpful to have on all queues.
> >>> There is no BQL support in this driver. BQL may improve throughput
> >>> further, but its absence should not be the root cause of this
> >>> reported issue.
> >>>
> >>>> Also if there was a way to present it as two interfaces, rather
> >>>> than one, that would allow for a specific avb device to be presented.
> >>>>
> >>>> Or:
> >>>>
> >>>> Is there a standard means of signalling down the stack via the IP
> >>>> layer (a dscp? a setsockopt?) that the AVB queue is requested?
> >>>>
> >>> AFAIK, AVB is in the scope of VLAN, so we can steer AVB packets
> >>> into queues 1 and 2 based on the VLAN ID.
> >>
> >> I had to look up what AVB even means, but from my current
> >> understanding it doesn't seem right that for non-AVB packets the
> >> driver picks any of the three queues in a random fashion while at the
> >> same time knowing that queues 1 and 2 have a 50% limitation on the
> >> bandwidth. Shouldn't there be some way to prefer queue 0 without
> >> needing the user to set it up or even arbitrarily limiting the number
> >> of queues as proposed above?
> >
> > Yes, I think we can. Looking into the NXP local implementation, there
> > is a ndo_select_queue callback:
> > https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> > This is the version for L5.4 kernel.
> 
> Yes, this looks like it could solve the issue. Would you mind preparing a patch to
> upstream the change in [1]? I would be happy to test (at least the non-AVB
> case) and review.

Yes, I can have a try. I see this patch has been sitting in the downstream tree for many years, and I don't know its history.
Anyway, I will try to upstream it first and see if anyone has comments.

Best Regards,
Joakim Zhang
> Thanks
> Frieder
> 
> [1]
> https://source.codeaurora.org/external/imx/linux-imx/commit?id=8a7fe8f38b7e3b2f9a016dcf4b4e38bb941ac6df
