Message-ID: <20250704161859.871152-1-michal.kubiak@intel.com>
Date: Fri, 4 Jul 2025 18:18:56 +0200
From: Michal Kubiak <michal.kubiak@...el.com>
To: intel-wired-lan@...ts.osuosl.org
Cc: maciej.fijalkowski@...el.com,
aleksander.lobakin@...el.com,
larysa.zaremba@...el.com,
netdev@...r.kernel.org,
przemyslaw.kitszel@...el.com,
anthony.l.nguyen@...el.com,
Michal Kubiak <michal.kubiak@...el.com>
Subject: [PATCH iwl-next 0/3] ice: convert Rx path to Page Pool
This series modernizes the Rx path in the ice driver by removing legacy
code and switching to the Page Pool API. The changes follow the same
direction as the earlier iavf conversion and aim to simplify buffer
management, improve maintainability, and prepare for future
infrastructure reuse.
An important motivation for this work was addressing reports of poor
performance in XDP_TX mode when IOMMU is enabled. The legacy Rx model
incurred significant overhead due to per-frame DMA mapping, which
limited throughput in virtualized environments. This series eliminates
those bottlenecks by adopting Page Pool and bi-directional DMA mapping.
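As a rough illustration of the idea (not the actual ice code; function
names and field values below are made up for the example), a Page Pool
created with PP_FLAG_DMA_MAP and a bi-directional DMA direction maps
each page once when it enters the pool, so neither Rx refill nor XDP_TX
has to pay the per-frame dma_map/dma_unmap cost that hurts under an
IOMMU:

#include <net/page_pool/helpers.h>

/* Illustrative sketch only; ring sizing and offsets are placeholders. */
static struct page_pool *example_create_rx_pool(struct device *dev,
						u32 ring_len)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= ring_len,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		/* Map for both device-write Rx and XDP_TX transmit DMA. */
		.dma_dir	= DMA_BIDIRECTIONAL,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	/* Pages are DMA-mapped once as the pool allocates them,
	 * not once per received or transmitted frame.
	 */
	return page_pool_create(&pp);
}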
The first patch removes the legacy Rx path, which relied on manual skb
allocation and header copying. This path has become obsolete due to the
availability of build_skb() and the increasing complexity of supporting
features like XDP and multi-buffer.
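For reference, the build_skb()-based model the driver keeps wraps the
already-mapped buffer instead of allocating an skb and copying headers
into it. A minimal sketch, assuming a page-backed buffer laid out as
[headroom | packet data | skb_shared_info] (helper and argument names
here are hypothetical, not the ice internals):

/* Hypothetical helper: build an skb around an existing Rx buffer. */
static struct sk_buff *example_build_rx_skb(void *buf_va, u32 headroom,
					    u32 size)
{
	struct sk_buff *skb;

	/* No new data area, no header memcpy; skb_shared_info must
	 * already fit at the end of the buffer.
	 */
	skb = napi_build_skb(buf_va, PAGE_SIZE);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, headroom);	/* skip headroom/XDP space */
	skb_put(skb, size);		/* mark the received bytes */

	return skb;
}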
The second patch drops the page splitting and recycling logic. While it
once helped optimize memory usage, it introduced significant complexity
and hotpath overhead. Removing it simplifies the Rx flow and sets the
stage for Page Pool adoption.
The final patch switches the driver to use the Page Pool and libeth
APIs. It also updates the XDP implementation to use libeth_xdp helpers
and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
a significant performance improvement in virtualized environments with
IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
performance remains on par with the previous implementation.
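To illustrate the XDP_TX side (again a hedged sketch with made-up ring
and descriptor helpers, not the libeth_xdp code the patch actually
uses): because every Rx page already carries a bi-directional Page Pool
mapping, transmitting a frame back out only needs a DMA sync rather
than a fresh dma_map_single()/dma_unmap_single() pair per frame:

/* Hypothetical Tx ring; only the device pointer matters here. */
struct example_tx_ring {
	struct device *dev;
	/* ... descriptor ring fields ... */
};

/* Hypothetical XDP_TX path reusing the Page Pool DMA mapping. */
static int example_xdp_tx(struct example_tx_ring *xdp_ring,
			  struct xdp_buff *xdp)
{
	struct page *page = virt_to_page(xdp->data);
	u32 size = xdp->data_end - xdp->data;
	dma_addr_t dma;

	/* Base mapping was set up by the pool (PP_FLAG_DMA_MAP). */
	dma = page_pool_get_dma_addr(page) + offset_in_page(xdp->data);

	/* Flush CPU-side edits (e.g. rewritten headers) to the device. */
	dma_sync_single_for_device(xdp_ring->dev, dma, size,
				   DMA_BIDIRECTIONAL);

	/* Post a Tx descriptor with the pre-mapped address
	 * (example_post_tx_desc() is a placeholder).
	 */
	example_post_tx_desc(xdp_ring, dma, size);
	return 0;
}

Under an IOMMU the per-frame map/unmap dominated the XDP_TX cost, which
is consistent with the >5x gain reported above.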
This conversion also aligns with the broader effort to modularize and
unify XDP support across Intel Ethernet drivers.
Tested with various workloads, including netperf and XDP modes (PASS,
DROP, TX), with and without the IOMMU. No regressions were observed.
Last but not least, this series may also help mitigate the memory
consumption issues recently reported against the driver.
For further details, see:
https://lore.kernel.org/intel-wired-lan/CAK8fFZ4hY6GUJNENz3wY9jaYLZXGfpr7dnZxzGMYoE44caRbgw@mail.gmail.com/
Thanks,
Michal
Michal Kubiak (3):
  ice: remove legacy Rx and construct SKB
  ice: drop page splitting and recycling
  ice: switch to Page Pool
drivers/net/ethernet/intel/Kconfig | 1 +
drivers/net/ethernet/intel/ice/ice.h | 3 +-
drivers/net/ethernet/intel/ice/ice_base.c | 122 ++--
drivers/net/ethernet/intel/ice/ice_ethtool.c | 22 +-
drivers/net/ethernet/intel/ice/ice_lib.c | 1 -
drivers/net/ethernet/intel/ice/ice_main.c | 21 +-
drivers/net/ethernet/intel/ice/ice_txrx.c | 645 +++---------------
drivers/net/ethernet/intel/ice/ice_txrx.h | 37 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 65 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 7 +-
drivers/net/ethernet/intel/ice/ice_virtchnl.c | 5 +-
drivers/net/ethernet/intel/ice/ice_xsk.c | 120 +---
drivers/net/ethernet/intel/ice/ice_xsk.h | 6 +-
13 files changed, 205 insertions(+), 850 deletions(-)
--
2.45.2