Message-Id: <20221110162148.3533816-1-alexandr.lobakin@intel.com>
Date: Thu, 10 Nov 2022 17:21:48 +0100
From: Alexander Lobakin <alexandr.lobakin@...el.com>
To: Andrew Lunn <andrew@...n.ch>
Cc: Alexander Lobakin <alexandr.lobakin@...el.com>,
Horatiu Vultur <horatiu.vultur@...rochip.com>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
bpf@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
linux@...linux.org.uk, UNGLinuxDriver@...rochip.com
Subject: Re: [PATCH net-next v3 0/4] net: lan966x: Add xdp support
From: Andrew Lunn <andrew@...n.ch>
Date: Thu, 10 Nov 2022 14:57:35 +0100
> > Nice stuff! I hear from time to time that XDP is for 10G+ NICs
> > only, but I'm not a fan of that notion, and this series proves once
> > again that XDP fits any hardware ^.^
>
> The Freescale FEC recently gained XDP support. Many variants of it are
> Fast Ethernet only.
>
> What I found most interesting about that patchset was that the use of
> the page_pool API made the driver significantly faster for the general
> case as well as for XDP.
The driver didn't have any page recycling or page splitting logic,
while Page Pool recycles even pages from skbs when
skb_mark_for_recycle() is used, which is the case here. So it
significantly reduced the number of new page allocations on Rx, if
there are any left at all.
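To illustrate (a rough sketch only, not the actual lan966x code --
the helper names and sizes below are made up), a Page Pool backed Rx
path boils down to something like:

#include <net/page_pool.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

/* Hypothetical helpers, for illustration only */
static struct page_pool *rx_pool_create(struct device *dev)
{
	struct page_pool_params pp = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= 512,	/* usually the Rx ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp);
}

static struct sk_buff *rx_build_skb(struct page *page, u32 len)
{
	struct sk_buff *skb;

	skb = build_skb(page_address(page), PAGE_SIZE);
	if (unlikely(!skb))
		return NULL;

	skb_put(skb, len);
	/* Hand the page back to the pool on kfree_skb() instead of
	 * freeing it -- this one-liner is what turns most Rx
	 * allocations into pool hits.
	 */
	skb_mark_for_recycle(skb);

	return skb;
}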
Plus, Page Pool allocates pages in bulk (batches of 16, IIRC), not
one by one, which reduces CPU overhead as well.
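E.g. a refill loop stays dead simple on the driver side (again just a
sketch, rx_desc is made up), while the batching happens inside the
pool on a cache miss:

struct rx_desc {
	dma_addr_t dma;
};

static int rx_ring_refill(struct page_pool *pool,
			  struct rx_desc *descs, u32 count)
{
	u32 i;

	for (i = 0; i < count; i++) {
		/* Cheap for most iterations: the pool tops up its
		 * internal cache from the page allocator in bulk,
		 * not page by page.
		 */
		struct page *page = page_pool_dev_alloc_pages(pool);

		if (unlikely(!page))
			return -ENOMEM;

		/* Valid since the pool was created with
		 * PP_FLAG_DMA_MAP above.
		 */
		descs[i].dma = page_pool_get_dma_addr(page);
	}

	return 0;
}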
>
> Andrew
Thanks,
Olek