Message-ID: <20250811094307.4c2d42ae@kernel.org>
Date: Mon, 11 Aug 2025 09:43:07 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: "Pandey, Radhey Shyam" <radhey.shyam.pandey@....com>
Cc: "Gupta, Suraj" <Suraj.Gupta2@....com>, "andrew+netdev@...n.ch"
<andrew+netdev@...n.ch>, "davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>, "pabeni@...hat.com"
<pabeni@...hat.com>, "Simek, Michal" <michal.simek@....com>,
"sean.anderson@...ux.dev" <sean.anderson@...ux.dev>, "horms@...nel.org"
<horms@...nel.org>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "Katakam, Harini" <harini.katakam@....com>
Subject: Re: [PATCH net] net: xilinx: axienet: Increment Rx skb ring head
pointer after BD is successfully allocated in dmaengine flow
On Mon, 11 Aug 2025 15:55:02 +0000 Pandey, Radhey Shyam wrote:
> > That wasn't my reading, maybe I misinterpreted the code.
> >
> > From what I could tell the driver tries to give one new buffer for each buffer
> > completed. So it never tries to "catch up" on previously missed allocations. IOW say
> > we have a queue with 16 indexes, after 16 failures (which may be spread out over
> > time) the ring will be empty.
>
> Yes, IIRC there is a 1:1 mapping between the RX DMA callback and
> axienet_rx_submit_desc(). If axienet_rx_submit_desc() fails, the
> current implementation cannot reattempt the allocation.
> Theoretically there could be other errors in rx_submit_desc()
> as well (such as dma_mapping or netdev allocation failures).
>
> One thought is to have some flag/index to indicate that the
> allocation should be reattempted in a subsequent
> axienet_rx_submit_desc()?
Yes, some kind of counter of buffers that need to be allocated.
The other problem to solve is that once the buffers are completely
depleted there will be no completion callback, and therefore no
opportunity to refill. For drivers which refill from NAPI this is
usually solved by periodically rescheduling NAPI.