[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <c7e4a416-9da4-7ff2-2223-589fd66f557d@intel.com>
Date: Thu, 6 Jul 2023 13:39:07 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Souradeep Chakrabarti <schakrabarti@...rosoft.com>, souradeep chakrabarti
<schakrabarti@...ux.microsoft.com>
CC: KY Srinivasan <kys@...rosoft.com>, Haiyang Zhang <haiyangz@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, Long Li <longli@...rosoft.com>, Ajay
Sharma <sharmaajay@...rosoft.com>, "leon@...nel.org" <leon@...nel.org>,
"cai.huoqing@...ux.dev" <cai.huoqing@...ux.dev>,
"ssengar@...ux.microsoft.com" <ssengar@...ux.microsoft.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "linux-hyperv@...r.kernel.org"
<linux-hyperv@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>, "stable@...r.kernel.org"
<stable@...r.kernel.org>
Subject: Re: [EXTERNAL] Re: [PATCH V4 net] net: mana: Fix MANA VF unload when
host is unresponsive
From: Souradeep Chakrabarti <schakrabarti@...rosoft.com>
Date: Thu, 6 Jul 2023 10:41:03 +0000
>
>
>> -----Original Message-----
>> From: Alexander Lobakin <aleksander.lobakin@...el.com>
>> Sent: Wednesday, July 5, 2023 8:06 PM
[...]
>>>> 120 seconds by 2 msec step is 60000 iterations, by 1 msec is 120000
>>>> iterations. I know usleep_range() often is much less precise, but still.
>>>> Do you really need that much time? Has this been measured during the
>>>> tests that it can take up to 120 seconds or is it just some random
>>>> value that "should be enough"?
>>>> If you really need 120 seconds, I'd suggest using a timer / delayed
>>>> work instead of wasting resources.
>>> Here the intent is not waiting for 120 seconds, rather than avoid
>>> continue checking the pending_sends of each tx queues for an indefinite time,
>> before freeing sk_buffs.
>>> The pending_sends can only get decreased for a tx queue, if
>>> mana_poll_tx_cq() gets called for a completion notification and increased by
>> xmit.
>>>
>>> In this particular bug, apc->port_is_up is not set to false, causing
>>> xmit to keep increasing the pending_sends for the queue and
>>> mana_poll_tx_cq() not getting called for the queue.
>>>
>>> If we see the comment in the function mana_dealloc_queues(), it mentions it :
>>>
>>> 2346 /* No packet can be transmitted now since apc->port_is_up is false.
>>> 2347 * There is still a tiny chance that mana_poll_tx_cq() can re-enable
>>> 2348 * a txq because it may not timely see apc->port_is_up being cleared
>>> 2349 * to false, but it doesn't matter since mana_start_xmit() drops any
>>> 2350 * new packets due to apc->port_is_up being false.
>>>
>>> The value 120 seconds has been decided here based on maximum number of
>>> queues
>>
>> This is quite opposite to what you're saying above. How should I connect these
>> two:
>>
>> Here the intent is not waiting for 120 seconds
>>
>> +
>>
>> The value 120 seconds has been decided here based on maximum number of
>> queues
>>
>> ?
>> Can cleaning the Tx queues really last for 120 seconds?
>> My understanding is that timeouts need to be sensible and not go to the outer
>> space. What is the medium value you got during the tests?
>>
> For each queue each takes few milli second, in a normal condition. So
> based on maximum number of allowed queues for our h/w it won't
> go beyond a sec.
> The 120s only happens rarely during some NIC HW issue -unexpected.
> So this timeout will only trigger in a very rare scenario.
So set the timeout to 2 seconds if it makes no difference?
>>> are allowed in this specific hardware, it is a safe assumption.
>>>>
>>>>>
>>>>> + for (i = 0; i < apc->num_queues; i++) {
>>>>> + txq = &apc->tx_qp[i].txq;
>>>>> + cq = &apc->tx_qp[i].tx_cq;
[...]
Thanks,
Olek
Powered by blists - more mailing lists