Message-Id: <20240515.183719.2094245718391165470.fujita.tomonori@gmail.com>
Date: Wed, 15 May 2024 18:37:19 +0900 (JST)
From: FUJITA Tomonori <fujita.tomonori@...il.com>
To: jdamato@...tly.com
Cc: fujita.tomonori@...il.com, netdev@...r.kernel.org, andrew@...n.ch,
horms@...nel.org, kuba@...nel.org, jiri@...nulli.us, pabeni@...hat.com,
linux@...linux.org.uk, hfdevel@....net
Subject: Re: [PATCH net-next v6 4/6] net: tn40xx: add basic Rx handling
Hi,
Thanks for reviewing the patch!
On Mon, 13 May 2024 11:37:27 -0700
Joe Damato <jdamato@...tly.com> wrote:
> On Sun, May 12, 2024 at 05:56:09PM +0900, FUJITA Tomonori wrote:
>> This patch adds basic Rx handling. The Rx logic uses three major data
>> structures: two ring buffers shared with the NIC and one database. One
>> ring buffer is used to tell the NIC about memory regions where received
>> packets should be stored. The other is used to get information from the
>> NIC about received packets. The database is used to keep the information
>> about DMA mappings. After a packet arrives, the db is used to pass the
>> packet to the network stack.
>
> I left one comment below, but also had a higher-level question unrelated
> to it:
>
> Have you considered using the page pool for allocating/recycling RX
> buffers? It might simplify your code significantly and reduce the amount of
> code that needs to be maintained. Several drivers are using the page pool
> already, so there are many examples.
>
> My apologies if you answered this in an earlier version and I just missed
> it.
The page pool hasn't been mentioned before. I'll try it.
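Something like this, I suppose (a rough, untested sketch; the page_pool_*
calls are the in-tree page pool API, but the tn40_* helpers, the pool_size
value, and the priv->page_pool/priv->pdev fields are placeholders I made
up):

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Create one pool per Rx ring during ring setup (hypothetical helper). */
static int tn40_create_page_pool(struct tn40_priv *priv)
{
	struct page_pool_params pp = {
		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order = 0,
		.pool_size = 256,		/* placeholder: Rx ring size */
		.nid = NUMA_NO_NODE,
		.dev = &priv->pdev->dev,	/* assumes priv->pdev */
		.napi = &priv->napi,
		.dma_dir = DMA_FROM_DEVICE,
		.max_len = PAGE_SIZE,
		.offset = 0,
	};
	struct page_pool *pool = page_pool_create(&pp);

	if (IS_ERR(pool))
		return PTR_ERR(pool);
	priv->page_pool = pool;			/* hypothetical field */
	return 0;
}

/* Refill path: the pool hands back pages that are already DMA mapped,
 * so the driver's own DMA mapping database for Rx could go away. */
static struct page *tn40_rx_alloc_page(struct tn40_priv *priv,
				       dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(priv->page_pool);

	if (!page)
		return NULL;
	*dma = page_pool_get_dma_addr(page);
	return page;
}

On the completion path the skb would be marked with skb_mark_for_recycle()
so the pages return to the pool, and page_pool_destroy() would tear it
down on ifdown. Let me actually try it and see how much code it removes.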
>> +static int tn40_poll(struct napi_struct *napi, int budget)
>> +{
>> +	struct tn40_priv *priv = container_of(napi, struct tn40_priv, napi);
>> +	int work_done;
>> +
>> +	tn40_tx_cleanup(priv);
>> +
>> +	if (!budget)
>> +		return 0;
>> +
>> +	work_done = tn40_rx_receive(priv, &priv->rxd_fifo0, budget);
>> +	if (work_done == budget)
>> +		return budget;
>> +
>> +	napi_complete_done(napi, work_done);
>
> I believe the return value of napi_complete_done should be checked here,
> and IRQs should be re-enabled only if it returns true.
>
> For example:
>
> 	if (napi_complete_done(napi, work_done))
> 		tn40_enable_interrupts(priv);
>
>> +	tn40_enable_interrupts(priv);
>> +	return work_done;
>> +}
Ah, I messed this up when I changed the poller to handle a zero
budget. I'll fix it in v7.
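So the tail of the poller in v7 would become (untested):

	if (work_done == budget)
		return budget;

	/* Re-enable IRQs only when NAPI is really done; if
	 * napi_complete_done() returns false, the poller will be
	 * invoked again and must not race with the interrupt. */
	if (napi_complete_done(napi, work_done))
		tn40_enable_interrupts(priv);

	return work_done;
}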
Thanks!