Message-ID: <IA1PR11MB62411902DEDC6ED33AC541F48B3AA@IA1PR11MB6241.namprd11.prod.outlook.com>
Date: Fri, 29 Aug 2025 06:12:34 +0000
From: "Rinitha, SX" <sx.rinitha@...el.com>
To: "Kubiak, Michal" <michal.kubiak@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>, "Lobakin,
Aleksander" <aleksander.lobakin@...el.com>, "Keller, Jacob E"
<jacob.e.keller@...el.com>, "Zaremba, Larysa" <larysa.zaremba@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Kitszel, Przemyslaw"
<przemyslaw.kitszel@...el.com>, "pmenzel@...gen.mpg.de"
<pmenzel@...gen.mpg.de>, "Nguyen, Anthony L" <anthony.l.nguyen@...el.com>,
"Kubiak, Michal" <michal.kubiak@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH iwl-next v2 3/3] ice: switch to Page
Pool
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@...osl.org> On Behalf Of Michal Kubiak
> Sent: 08 August 2025 21:27
> To: intel-wired-lan@...ts.osuosl.org
> Cc: Fijalkowski, Maciej <maciej.fijalkowski@...el.com>; Lobakin, Aleksander <aleksander.lobakin@...el.com>; Keller, Jacob E <jacob.e.keller@...el.com>; Zaremba, Larysa <larysa.zaremba@...el.com>; netdev@...r.kernel.org; Kitszel, Przemyslaw <przemyslaw.kitszel@...el.com>; pmenzel@...gen.mpg.de; Nguyen, Anthony L <anthony.l.nguyen@...el.com>; Kubiak, Michal <michal.kubiak@...el.com>
> Subject: [Intel-wired-lan] [PATCH iwl-next v2 3/3] ice: switch to Page Pool
>
> This patch completes the transition of the ice driver to use the Page Pool and libeth APIs, following the same direction as commit 5fa4caff59f2
> ("iavf: switch to Page Pool"). With the legacy page splitting and recycling logic already removed, the driver is now in a clean state to adopt the modern memory model.
>
> The Page Pool integration simplifies buffer management by offloading DMA mapping and recycling to the core infrastructure. This eliminates the need for driver-specific handling of headroom, buffer sizing, and page order. The libeth helper is used for CPU-side processing, while DMA-for-device is handled by the Page Pool core.
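The offload described above can be sketched with the libeth fill-queue API. This is an illustration modeled on the analogous iavf conversion (commit 5fa4caff59f2), not the exact ice code from this patch; "rx_ring" and its fields are placeholders:

```c
/* Illustrative sketch: libeth derives buffer size, headroom and page
 * order from the fill-queue description and creates the Page Pool
 * internally, so the driver no longer computes these itself.
 * "rx_ring" and its members are placeholder names.
 */
struct libeth_fq fq = {
	.count	= rx_ring->count,	/* ring length drives pool sizing */
	.nid	= NUMA_NO_NODE,
};
int err;

err = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi);
if (err)
	return err;

rx_ring->pp = fq.pp;	/* Page Pool created and owned via libeth */
```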
>
> Additionally, this patch extends the conversion to cover XDP support.
> The driver now uses libeth_xdp helpers for Rx buffer processing, and optimizes XDP_TX by skipping per-frame DMA mapping. Instead, all buffers are mapped as bi-directional up front, leveraging Page Pool's lifecycle management. This significantly reduces overhead in virtualized environments.
>
> Performance observations:
> - In typical scenarios (netperf, XDP_PASS, XDP_DROP), performance remains
> on par with the previous implementation.
> - In XDP_TX mode:
> * With IOMMU enabled, performance improves dramatically - over 5x
> increase - due to reduced DMA mapping overhead and better memory reuse.
> * With IOMMU disabled, performance remains comparable to the previous
> implementation, with no significant changes observed.
>
> This change is also a step toward a more modular and unified XDP implementation across Intel Ethernet drivers, aligning with ongoing efforts to consolidate and streamline feature support.
>
> Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> Suggested-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Signed-off-by: Michal Kubiak <michal.kubiak@...el.com>
> ---
> drivers/net/ethernet/intel/Kconfig | 1 +
> drivers/net/ethernet/intel/ice/ice_base.c | 85 ++--
> drivers/net/ethernet/intel/ice/ice_ethtool.c | 17 +-
> drivers/net/ethernet/intel/ice/ice_lib.c | 1 -
> drivers/net/ethernet/intel/ice/ice_main.c | 10 +-
> drivers/net/ethernet/intel/ice/ice_txrx.c | 443 +++---------------
> drivers/net/ethernet/intel/ice/ice_txrx.h | 33 +-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 65 ++-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 9 -
> drivers/net/ethernet/intel/ice/ice_xsk.c | 76 +--
> drivers/net/ethernet/intel/ice/ice_xsk.h | 6 +-
> 11 files changed, 200 insertions(+), 546 deletions(-)
>
Tested-by: Rinitha S <sx.rinitha@...el.com> (A Contingent worker at Intel)