Message-ID:
<PAXPR04MB8510BB8856CB343601102A9988E9A@PAXPR04MB8510.eurprd04.prod.outlook.com>
Date: Thu, 16 Oct 2025 08:28:39 +0000
From: Wei Fang <wei.fang@....com>
To: Jianpeng Chang <jianpeng.chang.cn@...driver.com>
CC: "imx@...ts.linux.dev" <imx@...ts.linux.dev>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, Claudiu Manoil <claudiu.manoil@....com>,
Vladimir Oltean <vladimir.oltean@....com>, Clark Wang
<xiaoning.wang@....com>, "andrew+netdev@...n.ch" <andrew+netdev@...n.ch>,
"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, Alexandru Marginean
<alexandru.marginean@....com>
Subject: RE: [v4 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
> After applying the workaround for ERR050089, the LS1028A platform
> experiences RCU stalls on the RT kernel. The stalls are caused by
> recursive acquisition of the read lock enetc_mdio_lock. Listed below
> are some of the call stacks identified under the enetc_poll path that
> can lead to a deadlock:
>
> enetc_poll
> -> enetc_lock_mdio
> -> enetc_clean_rx_ring OR napi_complete_done
> -> napi_gro_receive
> -> enetc_start_xmit
> -> enetc_lock_mdio
> -> enetc_map_tx_buffs
> -> enetc_unlock_mdio
> -> enetc_unlock_mdio
>
> After enetc_poll acquires the read lock, a higher-priority writer
> preempts it and attempts to take the lock. The writer sees that the
> read lock is already held and is scheduled out. However, readers under
> enetc_poll can no longer re-acquire the read lock because a writer is
> now waiting, so the thread hangs. A sketch of the locking pattern
> follows.
>
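> To make the recursion concrete, here is a minimal sketch of the
> pattern (a simplified illustration only; the real helpers in the
> driver are also conditional on the ERR050089 workaround being
> enabled):
>
> 	/* Workaround lock: many readers (register access paths),
> 	 * one writer (the MDIO access that must be serialized
> 	 * against everything else).
> 	 */
> 	static DEFINE_RWLOCK(enetc_mdio_lock);
>
> 	static inline void enetc_lock_mdio(void)
> 	{
> 		read_lock(&enetc_mdio_lock);
> 	}
>
> 	static inline void enetc_unlock_mdio(void)
> 	{
> 		read_unlock(&enetc_mdio_lock);
> 	}
>
> 	/* enetc_poll() holds the read lock while napi_gro_receive()
> 	 * can re-enter the driver via enetc_start_xmit(), which calls
> 	 * enetc_lock_mdio() a second time on the same CPU. On
> 	 * PREEMPT_RT a reader may not recurse once a writer is
> 	 * queued, so the inner read_lock() blocks forever.
> 	 */
>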
> Fix the deadlock by adjusting enetc_lock_mdio to prevent recursive
> lock acquisition; one possible shape of that adjustment is sketched
> below.
>
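> The following is illustrative only and not necessarily what the
> patch does: the inner acquisition could be skipped with a per-CPU
> nesting counter, assuming the helpers always run with bottom halves
> disabled so the counter cannot race with migration:
>
> 	static DEFINE_PER_CPU(int, enetc_mdio_depth);
>
> 	static inline void enetc_lock_mdio(void)
> 	{
> 		/* Only the outermost caller takes the read lock, so a
> 		 * nested path (enetc_poll -> enetc_start_xmit) never
> 		 * waits behind a queued writer.
> 		 */
> 		if (this_cpu_inc_return(enetc_mdio_depth) == 1)
> 			read_lock(&enetc_mdio_lock);
> 	}
>
> 	static inline void enetc_unlock_mdio(void)
> 	{
> 		if (this_cpu_dec_return(enetc_mdio_depth) == 0)
> 			read_unlock(&enetc_mdio_lock);
> 	}
>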
> Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@...driver.com>
Thanks.
Acked-by: Wei Fang <wei.fang@....com>