Message-ID: <840fd286-779e-4130-b544-913116c97a29@lunn.ch>
Date: Fri, 23 Jan 2026 00:04:55 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Paolo Valerio <pvalerio@...hat.com>
Cc: netdev@...r.kernel.org, Nicolas Ferre <nicolas.ferre@...rochip.com>,
Claudiu Beznea <claudiu.beznea@...on.dev>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Théo Lebrun <theo.lebrun@...tlin.com>
Subject: Re: [PATCH net-next 3/8] cadence: macb: Add page pool support handle
multi-descriptor frame rx
On Thu, Jan 22, 2026 at 11:24:05PM +0100, Paolo Valerio wrote:
> On 16 Jan 2026 at 06:16:16 PM, Andrew Lunn <andrew@...n.ch> wrote:
>
> > On Thu, Jan 15, 2026 at 11:25:26PM +0100, Paolo Valerio wrote:
> >> Use the page pool allocator for the data buffers and enable skb recycling
> >> support, instead of relying on netdev_alloc_skb allocating the entire skb
> >> during the refill.
> >
> > Do you have any benchmark numbers for this change? Often swapping to
> > page pool improves the performance of the driver, and i use it as a
> > selling point for doing the conversion, independent of XDP.
> >
>
> I finally got the chance to get my hands on the board.
>
> On the rpi5 I simply ran xdp-bench in skb-mode to drop packets and
> collect the stats.
>
> Page size is 4k. The stats cover both the case where the driver consumes
> a full page (MTU set so that rx_buffer_size + overhead exceeds half a
> page) and the opposite case, where two fragments fit in one page.
>
> |            |      64 |     128 |
> | baseline   | 533,158 | 531,618 |
> | pp page    | 530,929 | 529,682 |
> | pp 2 frags | 530,781 | 529,116 |
I was more interested in plain networking, not XDP. Does it perform
better with page pool? You at least need to show it is not worse;
performance regressions need to be avoided.
Andrew
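
For reference, here is a minimal sketch of the rx pattern the patch
description above refers to: data buffers come from a page pool and the
skb is marked for recycling, instead of the whole skb being allocated
with netdev_alloc_skb() during refill. This is only an illustration of
the generic page pool API, not the actual macb patch; all names prefixed
with my_ are made up, and details such as headroom, DMA sync and ring
management are omitted.

/*
 * Illustrative sketch only -- not the macb patch. Shows the generic
 * page pool rx pattern: allocate data buffers from a page_pool and
 * mark the skb for recycling, instead of allocating a full skb with
 * netdev_alloc_skb() at refill time. Header paths match recent kernels.
 */
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

struct my_rx_ring {                     /* hypothetical driver state */
        struct page_pool *pool;
};

static int my_create_page_pool(struct my_rx_ring *ring, struct device *dev,
                               unsigned int ring_size)
{
        struct page_pool_params pp = {
                .flags          = PP_FLAG_DMA_MAP,  /* pool handles DMA mapping */
                .order          = 0,                /* one page per buffer */
                .pool_size      = ring_size,
                .nid            = NUMA_NO_NODE,
                .dev            = dev,
                .dma_dir        = DMA_FROM_DEVICE,
        };

        ring->pool = page_pool_create(&pp);
        return PTR_ERR_OR_ZERO(ring->pool);
}

/* Refill one rx descriptor with a page taken from the pool. */
static struct page *my_rx_refill_one(struct my_rx_ring *ring)
{
        return page_pool_dev_alloc_pages(ring->pool);
}

/* On completion, wrap the page in an skb and allow the page to be recycled. */
static struct sk_buff *my_rx_build_skb(struct my_rx_ring *ring,
                                       struct page *page, unsigned int len)
{
        struct sk_buff *skb = build_skb(page_address(page), PAGE_SIZE);

        if (!skb) {
                page_pool_recycle_direct(ring->pool, page);
                return NULL;
        }
        skb_put(skb, len);
        skb_mark_for_recycle(skb);  /* page returns to the pool on kfree_skb() */
        return skb;
}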