Message-ID: <1918420534.1246603.1746786066099.JavaMail.zimbra@couthit.local>
Date: Fri, 9 May 2025 15:51:06 +0530 (IST)
From: Parvathi Pudi <parvathi@...thit.com>
To: pabeni <pabeni@...hat.com>
Cc: danishanwar <danishanwar@...com>, rogerq <rogerq@...nel.org>,
andrew+netdev <andrew+netdev@...n.ch>, davem <davem@...emloft.net>,
edumazet <edumazet@...gle.com>, kuba <kuba@...nel.org>,
robh <robh@...nel.org>, krzk+dt <krzk+dt@...nel.org>,
conor+dt <conor+dt@...nel.org>, ssantosh <ssantosh@...nel.org>,
tony <tony@...mide.com>, richardcochran <richardcochran@...il.com>,
glaroque <glaroque@...libre.com>, schnelle <schnelle@...ux.ibm.com>,
m-karicheri2 <m-karicheri2@...com>, s hauer <s.hauer@...gutronix.de>,
rdunlap <rdunlap@...radead.org>, diogo ivo <diogo.ivo@...mens.com>,
basharath <basharath@...thit.com>, horms <horms@...nel.org>,
jacob e keller <jacob.e.keller@...el.com>,
m-malladi <m-malladi@...com>,
javier carrasco cruz <javier.carrasco.cruz@...il.com>,
afd <afd@...com>, s-anna <s-anna@...com>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
netdev <netdev@...r.kernel.org>,
devicetree <devicetree@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
pratheesh <pratheesh@...com>, Prajith Jayarajan <prajith@...com>,
Vignesh Raghavendra <vigneshr@...com>, praneeth <praneeth@...com>,
srk <srk@...com>, rogerq <rogerq@...com>,
krishna <krishna@...thit.com>, pmohan <pmohan@...thit.com>,
mohan <mohan@...thit.com>, parvathi <parvathi@...thit.com>
Subject: Re: [PATCH net-next v7 04/11] net: ti: prueth: Adds link detection,
RX and TX support.
Hi,
> On 5/3/25 3:11 PM, Parvathi Pudi wrote:
>> +/**
>> + * icssm_emac_rx_thread - EMAC Rx interrupt thread handler
>> + * @irq: interrupt number
>> + * @dev_id: pointer to net_device
>> + *
>> + * EMAC Rx interrupt thread handler - processes the rx frames in an irq
>> + * thread function. There is only a limited buffer at the ingress to
>> + * queue the frames. As the frames must be emptied as quickly as
>> + * possible to avoid overflow, an irq thread is necessary. The current
>> + * implementation based on NAPI poll results in packet loss due to
>> + * overflow at the ingress queues. Industrial use cases require loss-free
>> + * packet processing. Tests show that with threaded-irq-based processing,
>> + * no overflow happens when receiving at ~92Mbps for MTU-sized frames,
>> + * thus meeting the requirement for industrial use cases.
>
> The above statement is highly suspicious. On a non-idle system the
> threaded irq can be delayed for an unbounded amount of time. On an idle
> system napi_poll should be invoked with a latency comparable - if not
> lower - to the threaded irq. Possibly you tripped on some H/W-induced
> latency to re-program the ISR?
>
> In any case I think we need a better-argued statement to
> intentionally avoid NAPI.
>
> Cheers,
>
> Paolo
The above comment was from the developer to highlight that there is an improvement in
performance with threaded IRQs compared to NAPI. The improvement was observed due to
the limited PRU buffer pool (it holds only 3 MTU-sized packets). We need to service the
queue as soon as a packet is written to prevent overflow. To achieve this, a threaded
IRQ with the highest priority is used. We will clean up the comments in the next version.
Thanks and Regards,
Parvathi.