Message-ID: <20231027225822.109d7583@xps-13>
Date: Fri, 27 Oct 2023 22:58:22 +0200
From: Miquel Raynal <miquel.raynal@...tlin.com>
To: Alexander Stein <alexander.stein@...tq-group.com>
Cc: Stephen Hemminger <stephen@...workplumber.org>, Andrew Lunn
<andrew@...n.ch>, Wei Fang <wei.fang@....com>, Shenwei Wang
<shenwei.wang@....com>, Clark Wang <xiaoning.wang@....com>, Russell King
<linux@...linux.org.uk>, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, linux-imx@....com,
netdev@...r.kernel.org, Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Alexandre Belloni <alexandre.belloni@...tlin.com>, Maxime Chevallier
<maxime.chevallier@...tlin.com>
Subject: Re: Ethernet issue on imx6
Hi Alexander,
> > The full kernel log is at the bottom of this e-mail:
> > https://lore.kernel.org/netdev/20231013102718.6b3a2dfe@xps-13/
> >
> > On the module I read on a white sticker:
> > TQMA6Q-AA
> > RK.0203
> > And on one side of the PCB:
> > TQMa6x.0201
> >
> > Do you know if this module has the hardware workaround discussed below?
> > (I don't have the schematics of the module)
>
> Yes, the TQMA6Q-AA RK.0203 has the Ethernet hardware workaround implemented.
> So you should use the imx6q-tqma6a.dtsi (and eventually imx6qdl-tqma6a.dtsi)
> module device tree.
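Thanks. For the record, I understand a board description selecting the
A variant then boils down to something like the sketch below (the board
name and compatible string are made up for illustration, they do not
come from an actual TQ board file):

/dts-v1/;

#include "imx6q-tqma6a.dtsi"

/ {
	/* Hypothetical carrier board, for illustration only */
	model = "Custom carrier board with a TQMa6Q-AA module";
	compatible = "vendor,custom-imx6q-board", "fsl,imx6q";
};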
[...]
> > > > > Please note that there are two different module variants,
> > > > > imx6qdl-tqma6a.dtsi and imx6qdl-tqma6b.dtsi. They deal with i.MX6's
> > > > > ERR006687 differently. Packet drop without any load somewhat
> > > > > indicates this issue.
> > > >
> > > > I've tried with and without the fsl,err006687-workaround-present DT
> > > > property. It gets successfully parsed and I see the deepest idle state
> > > > being disabled under mach-imx. I've also tried just commenting out the
> > > > registration of the cpuidle driver, just to be sure. I saw no
> > > > difference.
> > >
> > > fsl,err006687-workaround-present requires a specific HW workaround, see
> > > [1]. So this is not applicable on every module.
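Understood. If I read this correctly, the property only makes sense
together with the board-level rerouting of the ENET interrupt to the
GPIO_6 pad, i.e. something along these lines on the fec node of a board
.dts that already includes imx6qdl.dtsi (a sketch based on how boards
with the hardware workaround, such as the SabreSD, describe it; the
interrupt numbers should be double-checked against imx6qdl.dtsi):

&fec {
	/*
	 * ERR006687 hardware workaround: the ENET interrupt is also
	 * routed to the GPIO1_6 pad on the board, so it can wake the
	 * SoC from the deepest idle state.
	 */
	interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
			      <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
	/* Lets mach-imx keep the deepest cpuidle state enabled */
	fsl,err006687-workaround-present;
};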
> >
> > Based on the information provided above, do you think I can rely on the
> > HW workaround?
>
> The original U-Boot auto-detects whether the hardware workaround is present
> and selects the appropriate device tree by default, either variant A or B,
> for MBa6x usage.
So apparently the hardware workaround is present on my module and is
already enabled by software. The erratum would therefore not be the real
issue, only something that makes it worse. I think I diagnosed an issue
related to concurrent DMA reads from RAM while the IPU is active. Here
is the link to the new discussion:
https://lists.freedesktop.org/archives/dri-devel/2023-October/428251.html
> > I've tried disabling the registration of both the CPUidle and CPUfreq
> > drivers in the machine code and I see a real difference. The transfers
> > are still not perfect, though; I believe this is related to the ~1%
> > drop rate on the RGMII lines (the timings are not perfect, but I could
> > not extend them any further).
> >
> > I believe if the hardware workaround is not available on this module I
> > can still disable CPUidle and CPUfreq as a workaround for the missing
> > workaround...?
>
> It's hard to say without knowing the cause of your problem. I didn't see any of
> these problems here.
>
> > > > By the way, we tried with a TQ eval board with this SoM and saw the same
> > > > issue (not me, I don't have this board in hand). Don't you experience
> > > > something similar? I came across a couple of people reporting similar
> > > > issues with these modules but none of them reported how they fixed it
> > > > (if they did). I tried two different images based on TQ's GitHub, using
> > > > the v4.14.69 and v5.10 kernels.
>
> You mentioned a couple of other people having similar problems with these
> modules. Can you tell me more about those? I'd like to gather more
> information. Thanks.
I searched again and found this one, which really looks identical to my
initial issue:
https://community.nxp.com/t5/i-MX-Processors/Why-Imx6q-ethernet-is-too-slow/m-p/918992
Plus one other which I cannot find anymore.
>
> Best regards,
> Alexander
Thanks,
Miquèl