Message-ID: <2d7f6fe0-9205-4eaf-bc43-dc36b14925a4@davidwei.uk>
Date: Mon, 16 Dec 2024 13:41:46 -0800
From: David Wei <dw@...idwei.uk>
To: Michael Chan <michael.chan@...adcom.com>,
Somnath Kotur <somnath.kotur@...adcom.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>, netdev@...r.kernel.org,
Andy Gospodarek <andrew.gospodarek@...adcom.com>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH net v3 3/3] bnxt_en: handle tpa_info in queue API
 implementation

On 2024-12-11 09:11, Michael Chan wrote:
> On Wed, Dec 11, 2024 at 8:10 AM David Wei <dw@...idwei.uk> wrote:
>>
>> On 2024-12-11 04:32, Yunsheng Lin wrote:
>>> On 2024/12/11 2:14, David Wei wrote:
>>>> On 2024-12-10 04:25, Yunsheng Lin wrote:
>>>>> On 2024/12/4 12:10, David Wei wrote:
>>>>>
>>>>>> bnxt_copy_rx_ring(bp, rxr, clone);
>>>>>> @@ -15563,6 +15580,8 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
>>>>>> bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
>>>>>> rxr->rx_next_cons = 0;
>>>>>> page_pool_disable_direct_recycling(rxr->page_pool);
>>>>>> + if (bnxt_separate_head_pool())
>>>>>> + page_pool_disable_direct_recycling(rxr->head_pool);
>>>>>
>>>>> Hi, David
>>>>> As mentioned in [1], is the above page_pool_disable_direct_recycling()
>>>>> really needed?
>>>>>
>>>>> Is any NAPI API called in the implementation of netdev_queue_mgmt_ops?
>>>>> From reading the code, it doesn't seem that any NAPI API like
>>>>> napi_enable() or ____napi_schedule() is called in bnxt_queue_start()/
>>>>> bnxt_queue_stop()/bnxt_queue_mem_alloc()/bnxt_queue_mem_free().
>>>>>
>>>>> 1. https://lore.kernel.org/all/c2b306af-4817-4169-814b-adbf25803919@huawei.com/
>>>>
>>>> Hi Yunsheng, there are explicitly no napi_enable/disable() calls in the
>>>> bnxt implementation of netdev_queue_mgmt_ops due to ... let's say HW/FW
>>>> quirks. I looked back at my discussions w/ Broadcom, and if I
>>>> understand/recall correctly, bnxt_hwrm_vnic_update() will prevent any
>>>> work from coming into the rxq that I'm trying to stop. Calling
>>>> napi_disable() has unintended side effects on the Tx side.
>>>
>>> It seems that bnxt_hwrm_vnic_update() sends a VNIC_UPDATE cmd to disable
>>> a VNIC, so a napi_disable() is not needed?
>>
>> Correct.
>>
>>> Is it possible that some pending NAPI work is still being processed
>>> after bnxt_hwrm_vnic_update() is called?
>>
>> Possibly, I don't know the details of how the HW works.
>>
>
> bnxt_hwrm_vnic_update() only stops the HW from receiving more packets.
> The completion ring may already have lots of RX entries and TPA entries.
> NAPI may be behind, and since it processes a batch of 64 entries at a
> time, it can take a while to go through all the remaining entries.
>
>> At the time I just wanted something to work, and not having
>> napi_enable/disable() made it work. :) Looking back though it does seem
>> odd, so I'll try putting it back.
>
> Yeah, I think it makes sense to add napi_disable(). Thanks.
Michael, Som: I can't add napi_disable()/enable() because the NAPI
instance is shared between the Rx and Tx queues. If I disable a NAPI
instance, it affects the corresponding Tx queue, which is not quiesced.
Only the Rx queue is quiesced, indirectly, by preventing the HW from
receiving packets via the call to bnxt_hwrm_vnic_update().
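
To make the ordering concrete, here is a rough sketch of the stop path
(from memory, not the actual code; the vnic handling and flag name may
be off):

static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
{
	struct bnxt *bp = netdev_priv(dev);
	struct bnxt_rx_ring_info *rxr = &bp->rx_ring[idx];
	struct bnxt_vnic_info *vnic = &bp->vnic_info[0];

	/* Tell the FW to stop placing new Rx work on this queue by
	 * setting the MRU to 0. NAPI may still be draining completion
	 * entries that were already posted.
	 */
	vnic->mru = 0;
	bnxt_hwrm_vnic_update(bp, vnic,
			      VNIC_UPDATE_REQ_ENABLES_MRU_UPDATE_MRU_VALID);

	/* Deliberately no napi_disable() here: the NAPI context is
	 * shared with the Tx queue, which must keep running.
	 */
	bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
	rxr->rx_next_cons = 0;
	page_pool_disable_direct_recycling(rxr->page_pool);
	if (bnxt_separate_head_pool())
		page_pool_disable_direct_recycling(rxr->head_pool);

	return 0;
}
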
The other implementation of the queue API (gve) quiesces all queues for
an individual queue stop/start operation. To call napi_disable()/enable()
I believe bnxt will need to do the same; a rough sketch of what that
might look like per ring pair is below.
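
Hypothetical sketch only (the helper name is mine, and this is
untested): the point is that the Tx side has to be quiesced before the
shared NAPI can be parked.

static void bnxt_quiesce_ring_pair(struct bnxt *bp, int idx)
{
	struct bnxt_napi *bnapi = bp->bnapi[idx];

	/* Stop the stack from queueing new Tx work on this ring. */
	netif_stop_subqueue(bp->dev, idx);

	/* Park the shared NAPI context. Any Tx completions still
	 * outstanding stay in the completion ring until queue_start
	 * calls napi_enable() again.
	 */
	napi_disable(&bnapi->napi);
}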