Message-ID: <PH3PPF67C992ECC632806C8E123CB24D1489101A@PH3PPF67C992ECC.namprd11.prod.outlook.com>
Date: Wed, 3 Sep 2025 07:24:58 +0000
From: "Singh, PriyaX" <priyax.singh@...el.com>
To: "intel-wired-lan-bounces@...osl.org" <intel-wired-lan-bounces@...osl.org>
CC: "Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>, "Keller, Jacob E"
<jacob.e.keller@...el.com>, "Zaremba, Larysa" <larysa.zaremba@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Kitszel, Przemyslaw"
<przemyslaw.kitszel@...el.com>, Paul Menzel <pmenzel@...gen.mpg.de>, "Nguyen,
Anthony L" <anthony.l.nguyen@...el.com>, "Kubiak, Michal"
<michal.kubiak@...el.com>, "Buvaneswaran, Sujai"
<sujai.buvaneswaran@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH iwl-next v2 3/3] ice: switch to Page
Pool
> This patch completes the transition of the ice driver to use the Page
> Pool and libeth APIs, following the same direction as commit
> 5fa4caff59f2 ("iavf: switch to Page Pool"). With the legacy page
> splitting and recycling logic already removed, the driver is now in a
> clean state to adopt the modern memory model.
>
> The Page Pool integration simplifies buffer management by offloading
> DMA mapping and recycling to the core infrastructure. This eliminates
> the need for driver-specific handling of headroom, buffer sizing, and
> page order. The libeth helper is used for CPU-side processing, while
> DMA-for-device is handled by the Page Pool core.
>
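For reviewers' reference, a minimal, self-contained sketch (not the actual
ice code) of what handing DMA mapping and recycling off to the Page Pool
core looks like. The function name ice_rxq_pp_create() and its parameters
are illustrative only, and the real driver code may go through libeth's Rx
helpers rather than open-coding page_pool_create():

    #include <linux/dma-mapping.h>
    #include <net/page_pool/helpers.h>

    static struct page_pool *ice_rxq_pp_create(struct device *dev,
                                               u32 ring_len, bool xdp)
    {
            struct page_pool_params pp = {
                    /* let the core map pages and sync them for device */
                    .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
                    .order     = 0,
                    .pool_size = ring_len,
                    .nid       = NUMA_NO_NODE,
                    .dev       = dev,
                    /* XDP_TX reuses Rx pages, so map them both ways */
                    .dma_dir   = xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
                    .max_len   = PAGE_SIZE,
                    .offset    = 0,
            };

            return page_pool_create(&pp);
    }
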
> Additionally, this patch extends the conversion to cover XDP support.
> The driver now uses libeth_xdp helpers for Rx buffer processing, and
> optimizes XDP_TX by skipping per-frame DMA mapping. Instead, all
> buffers are mapped as bi-directional up front, leveraging Page Pool's
> lifecycle management. This significantly reduces overhead in virtualized
> environments.
>
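To illustrate the XDP_TX point above: because every Rx page is already
mapped bi-directionally by the Page Pool, transmitting an XDP buffer only
needs a cache sync for the CPU writes (e.g. headers rewritten by the XDP
program) instead of a per-frame dma_map_single(). This is a hedged sketch;
ice_xdp_tx_dma() and its signature are hypothetical, not the driver's API:

    #include <linux/dma-mapping.h>
    #include <net/page_pool/helpers.h>

    static dma_addr_t ice_xdp_tx_dma(struct device *dev, struct page *page,
                                     u32 offset, u32 len)
    {
            /* the DMA mapping was established once, at pool fill time */
            dma_addr_t dma = page_pool_get_dma_addr(page) + offset;

            /* push any CPU modifications out before the HW reads them */
            dma_sync_single_for_device(dev, dma, len, DMA_BIDIRECTIONAL);

            return dma;
    }
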
> Performance observations:
> - In typical scenarios (netperf, XDP_PASS, XDP_DROP), performance remains
>   on par with the previous implementation.
> - In XDP_TX mode:
>   * With IOMMU enabled, performance improves dramatically - over 5x
>     increase - due to reduced DMA mapping overhead and better memory
>     reuse.
>   * With IOMMU disabled, performance remains comparable to the previous
>     implementation, with no significant changes observed.
>
> This change is also a step toward a more modular and unified XDP
> implementation across Intel Ethernet drivers, aligning with ongoing
> efforts to consolidate and streamline feature support.
>
> Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> Suggested-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Signed-off-by: Michal Kubiak <michal.kubiak@...el.com>
> ---
> drivers/net/ethernet/intel/Kconfig | 1 +
> drivers/net/ethernet/intel/ice/ice_base.c | 85 ++--
> drivers/net/ethernet/intel/ice/ice_ethtool.c | 17 +-
> drivers/net/ethernet/intel/ice/ice_lib.c | 1 -
> drivers/net/ethernet/intel/ice/ice_main.c | 10 +-
> drivers/net/ethernet/intel/ice/ice_txrx.c | 443 +++---------------
> drivers/net/ethernet/intel/ice/ice_txrx.h | 33 +-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 65 ++-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 9 -
> drivers/net/ethernet/intel/ice/ice_xsk.c | 76 +--
> drivers/net/ethernet/intel/ice/ice_xsk.h | 6 +-
> 11 files changed, 200 insertions(+), 546 deletions(-)

Tested-by: Priya Singh <priyax.singh@...el.com>