Message-ID: <6f36b6b6-8209-ed98-e7e1-3dac0a92f6cd@nvidia.com>
Date: Tue, 18 Jun 2019 20:44:10 +0100
From: Jon Hunter <jonathanh@...dia.com>
To: Jose Abreu <Jose.Abreu@...opsys.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: Joao Pinto <Joao.Pinto@...opsys.com>,
"David S . Miller" <davem@...emloft.net>,
Giuseppe Cavallaro <peppe.cavallaro@...com>,
Alexandre Torgue <alexandre.torgue@...com>,
Russell King <linux@...linux.org.uk>,
Andrew Lunn <andrew@...n.ch>,
Florian Fainelli <f.fainelli@...il.com>,
Heiner Kallweit <hkallweit1@...il.com>,
linux-tegra <linux-tegra@...r.kernel.org>
Subject: Re: [PATCH net-next 3/3] net: stmmac: Convert to phylink and remove
phylib logic
On 18/06/2019 16:20, Jon Hunter wrote:
>
> On 18/06/2019 11:18, Jon Hunter wrote:
>>
>> On 18/06/2019 10:46, Jose Abreu wrote:
>>> From: Jon Hunter <jonathanh@...dia.com>
>>>
>>>> I am not certain but I don't believe so. We are using a static IP address
>>>> and mounting the root file-system via NFS when we see this ...
>>>
>>> Can you please add a call to napi_synchronize() before every
>>> napi_disable() call, like this:
>>>
>>> if (queue < rx_queues_cnt) {
>>> 	napi_synchronize(&ch->rx_napi);
>>> 	napi_disable(&ch->rx_napi);
>>> }
>>>
>>> if (queue < tx_queues_cnt) {
>>> 	napi_synchronize(&ch->tx_napi);
>>> 	napi_disable(&ch->tx_napi);
>>> }
>>>
>>> [ I can send you a patch if you prefer ]
>>
>> Yes I can try this and for completeness you mean ...
>>
>> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>> index 4ca46289a742..d4a12cb64d8e 100644
>> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>> @@ -146,10 +146,15 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
>>  	for (queue = 0; queue < maxq; queue++) {
>>  		struct stmmac_channel *ch = &priv->channel[queue];
>>  
>> -		if (queue < rx_queues_cnt)
>> +		if (queue < rx_queues_cnt) {
>> +			napi_synchronize(&ch->rx_napi);
>>  			napi_disable(&ch->rx_napi);
>> -		if (queue < tx_queues_cnt)
>> +		}
>> +
>> +		if (queue < tx_queues_cnt) {
>> +			napi_synchronize(&ch->tx_napi);
>>  			napi_disable(&ch->tx_napi);
>> +		}
>>  	}
>>  }
>
> So good news and bad news ...
>
> The good news is that the above change does fix the initial crash
> I am seeing. However, even with this change applied on top of
> -next, it is still dying somewhere else and so there appears to
> be a second issue.
Further testing has shown that this does NOT actually resolve the issue;
I am still seeing the crash. Sorry for the false positive.
Jon
--
nvpublic