Message-ID: <20220502170527.GA137942@francesco-nb.int.toradex.com>
Date: Mon, 2 May 2022 19:05:27 +0200
From: Francesco Dolcini <francesco.dolcini@...adex.com>
To: Andrew Lunn <andrew@...n.ch>, netdev@...r.kernel.org,
Andy Duan <fugang.duan@....com>
Cc: Francesco Dolcini <francesco.dolcini@...adex.com>,
Joakim Zhang <qiangqing.zhang@....com>,
Heiner Kallweit <hkallweit1@...il.com>,
Russell King <linux@...linux.org.uk>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Fabio Estevam <festevam@...il.com>,
Tim Harvey <tharvey@...eworks.com>,
Chris Healy <cphealy@...il.com>
Subject: Re: FEC MDIO read timeout on linkup
Hello Andrew (and all),

I have a suspicion that this issue might be related to a recent change
you made.
On Fri, Apr 22, 2022 at 05:26:12PM +0200, Francesco Dolcini wrote:
> Hello all,
> I have recently been trying to debug an issue with the FEC driver
> reporting an MDIO read timeout during link up [0]. At the beginning I
> was working with an old 5.4 kernel, but today I tried with the current
> master (5.18.0-rc3-00080-gd569e86915b7) and the issue is still there.
>
...
>
> Could this be some sort of race condition? Any suggestions for
> debugging this?
>
> Here is the stack trace:
>
> [ 146.195696] fec 2188000.ethernet eth0: MDIO read timeout
> [ 146.201779] ------------[ cut here ]------------
> [ 146.206671] WARNING: CPU: 0 PID: 571 at drivers/net/phy/phy.c:942 phy_error+0x24/0x6c
> [ 146.214744] Modules linked in: bnep imx_vdoa imx_sdma evbug
> [ 146.220640] CPU: 0 PID: 571 Comm: irq/128-2188000 Not tainted 5.18.0-rc3-00080-gd569e86915b7 #9
> [ 146.229563] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
> [ 146.236257] unwind_backtrace from show_stack+0x10/0x14
> [ 146.241640] show_stack from dump_stack_lvl+0x58/0x70
> [ 146.246841] dump_stack_lvl from __warn+0xb4/0x24c
> [ 146.251772] __warn from warn_slowpath_fmt+0x5c/0xd4
> [ 146.256873] warn_slowpath_fmt from phy_error+0x24/0x6c
> [ 146.262249] phy_error from kszphy_handle_interrupt+0x40/0x48
> [ 146.268159] kszphy_handle_interrupt from irq_thread_fn+0x1c/0x78
> [ 146.274417] irq_thread_fn from irq_thread+0xf0/0x1dc
> [ 146.279605] irq_thread from kthread+0xe4/0x104
> [ 146.284267] kthread from ret_from_fork+0x14/0x28
> [ 146.289164] Exception stack(0xe6fa1fb0 to 0xe6fa1ff8)
> [ 146.294448] 1fa0: 00000000 00000000 00000000 00000000
> [ 146.302842] 1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [ 146.311281] 1fe0: 00000000 00000000 00000000 00000000 00000013 00000000
> [ 146.318262] irq event stamp: 12325
> [ 146.321780] hardirqs last enabled at (12333): [<c01984c4>] __up_console_sem+0x50/0x60
> [ 146.330013] hardirqs last disabled at (12342): [<c01984b0>] __up_console_sem+0x3c/0x60
> [ 146.338259] softirqs last enabled at (12324): [<c01017f0>] __do_softirq+0x2c0/0x624
> [ 146.346311] softirqs last disabled at (12319): [<c01300ac>] __irq_exit_rcu+0x138/0x178
> [ 146.354447] ---[ end trace 0000000000000000 ]---
Could it be that the issue is the write to MSCR in fec_restart(),
`writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED)`?
I do see the issue on link up/down events, which is exactly when this
function is called.
From what I can understand from the previous history:
1e6114f51f9d (net: fec: fix MDIO probing for some FEC hardware blocks, 2020-10-28)
f166f890c8f0 (net: ethernet: fec: Replace interrupt driven MDIO with polled IO, 2020-05-02)
writing to this register can trigger a FEC_ENET_MII interrupt, creating
a race condition with fec_enet_mdio_read(), which is also called on
link changes.
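
To make the interleaving I have in mind easier to picture, here is a
quick userspace toy model (purely illustrative, not the driver code:
ievent, EVT_MII, mdio_wait() and the thread names are all made up; as I
read f166f890c8f0, the real driver polls FEC_IEVENT for FEC_ENET_MII
and then clears it):

/*
 * Toy model of the suspected race, NOT the FEC driver code.
 * Thread A emulates the polled MDIO read: start a transfer, wait for
 * the "MII done" bit, clear it. Thread B emulates the MSCR write done
 * by fec_restart() on a link change, which on some FEC blocks also
 * raises the MII event (see 1e6114f51f9d). If B's event lands while A
 * is polling, A's wait completes before A's transfer is actually done.
 *
 * Build: cc -O2 -pthread mii-race.c -o mii-race
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define EVT_MII 0x00800000u		/* stand-in for FEC_ENET_MII */

static atomic_uint ievent;		/* stand-in for the IEVENT register */
static atomic_bool transfer_done;	/* has the emulated MDIO frame finished? */

/* Rough shape of the polled wait: poll for the bit, then clear it. */
static int mdio_wait(void)
{
	for (int i = 0; i < 10000; i++) {
		if (atomic_load(&ievent) & EVT_MII) {
			atomic_fetch_and(&ievent, ~EVT_MII);
			return 0;
		}
		usleep(1);
	}
	return -1;			/* the "MDIO read timeout" case */
}

static void *mdio_read_thread(void *arg)
{
	int ret;

	(void)arg;
	atomic_store(&transfer_done, false);	/* "start" an MDIO read */

	ret = mdio_wait();
	if (ret)
		puts("wait timed out");
	else if (!atomic_load(&transfer_done))
		puts("wait satisfied by the MSCR-induced event, frame not done");
	else
		puts("wait satisfied by the real completion event");

	return NULL;
}

static void *link_restart_thread(void *arg)
{
	(void)arg;
	/* Emulate the MSCR write in fec_restart(): it also raises MII. */
	atomic_fetch_or(&ievent, EVT_MII);
	return NULL;
}

int main(void)
{
	pthread_t reader, restarter;

	pthread_create(&reader, NULL, mdio_read_thread, NULL);
	pthread_create(&restarter, NULL, link_restart_thread, NULL);

	/* Much later the real MDIO frame completes and raises the event;
	 * by then the reader may already have consumed the spurious one. */
	usleep(50000);
	atomic_store(&transfer_done, true);
	atomic_fetch_or(&ievent, EVT_MII);

	pthread_join(reader, NULL);
	pthread_join(restarter, NULL);
	return 0;
}

This only shows the interleaving I'm thinking of; the exact failing
sequence on the real hardware may of course look different.
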
Does this explanation make any sense to you?
I'm currently testing a patch that should address this.

Francesco