Message-ID: <20250925092253.1306476-1-michal.kubiak@intel.com>
Date: Thu, 25 Sep 2025 11:22:50 +0200
From: Michal Kubiak <michal.kubiak@...el.com>
To: intel-wired-lan@...ts.osuosl.org
Cc: maciej.fijalkowski@...el.com,
aleksander.lobakin@...el.com,
jacob.e.keller@...el.com,
larysa.zaremba@...el.com,
netdev@...r.kernel.org,
przemyslaw.kitszel@...el.com,
pmenzel@...gen.mpg.de,
anthony.l.nguyen@...el.com,
Michal Kubiak <michal.kubiak@...el.com>
Subject: [PATCH iwl-next v3 0/3] ice: convert Rx path to Page Pool

This series modernizes the Rx path in the ice driver by removing legacy
code and switching to the Page Pool API. The changes follow the same
direction as the earlier iavf conversion and aim to simplify buffer
management, improve maintainability, and prepare for future
infrastructure reuse.

An important motivation for this work was addressing reports of poor
performance in XDP_TX mode when IOMMU is enabled. The legacy Rx model
incurred significant overhead due to per-frame DMA mapping, which
limited throughput in virtualized environments. This series eliminates
those bottlenecks by adopting Page Pool and bi-directional DMA mapping.
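
In outline, the new model lets the pool own the DMA mapping for the
lifetime of each page, with pages mapped bidirectionally so they can be
handed straight to the Tx side for XDP_TX. A minimal sketch of such a
setup (the field values and ring accessors are illustrative, not the
exact code from patch #3):

	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= ring->count,
		.nid		= NUMA_NO_NODE,
		.dev		= ring->dev,
		/* one mapping usable for both Rx DMA and XDP_TX */
		.dma_dir	= DMA_BIDIRECTIONAL,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

Each page is mapped once when the pool allocates it and unmapped only
when it finally leaves the pool, so the per-frame dma_map/dma_unmap
calls (and the IOTLB churn they cause under an IOMMU) disappear from
the hotpath.
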
The first patch removes the legacy Rx path, which relied on manual skb
allocation and header copying. This path has become obsolete due to the
availability of build_skb() and the increasing complexity of supporting
features like XDP and multi-buffer.
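
The difference in outline: the legacy path allocated a fresh skb and
copied the packet headers out of the Rx buffer, while build_skb() wraps
an skb directly around the already-filled buffer. A hedged sketch
('va', 'headlen', 'truesize', 'headroom' and 'size' are placeholders
for the driver's buffer bookkeeping):

	/* Legacy (removed in patch #1), roughly:
	 *
	 *	skb = napi_alloc_skb(&rx_ring->q_vector->napi,
	 *			     ICE_RX_HDR_SIZE);
	 *	memcpy(__skb_put(skb, headlen), va,
	 *	       ALIGN(headlen, sizeof(long)));
	 */

	/* build_skb(): no allocation for the head, no header copy */
	skb = build_skb(va, truesize);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, headroom);	/* skip the reserved headroom */
	__skb_put(skb, size);		/* frame length from the descriptor */
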
The second patch drops the page splitting and recycling logic. While
once used to optimize memory usage, this logic introduced significant
complexity and hotpath overhead. Removing it simplifies the Rx flow and
sets the stage for Page Pool adoption.
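
For context, the removed scheme (ice_can_reuse_rx_page() and friends)
split each page in half, bounced the buffer offset between the two
halves, and tracked a reference bias to decide whether a half could be
handed back to hardware. Roughly (structure and field names here are
illustrative):

	static bool can_reuse_rx_page(struct rx_buf *buf)
	{
		struct page *page = buf->page;

		/* only reuse pages local to this NUMA node... */
		if (page_to_nid(page) != numa_mem_id())
			return false;

		/* ...that the stack no longer holds references to */
		if (page_count(page) - buf->pagecnt_bias > 1)
			return false;

		/* flip to the other half of the page */
		buf->page_offset ^= buf->truesize;
		return true;
	}

All of this bookkeeping ran per frame; Page Pool moves the equivalent
recycling out of the driver entirely.
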
The final patch switches the driver to use the Page Pool and libeth
APIs. It also updates the XDP implementation to use libeth_xdp helpers
and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
a significant performance improvement in virtualized environments with
IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
performance remains on par with the previous implementation.
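
The XDP_TX win comes from the bidirectional mapping: a frame being
bounced back out only needs a DMA sync, not a fresh mapping. A sketch
of the idea ('dev' and the descriptor fill are placeholders; the actual
patch goes through the libeth_xdp helpers):

	u32 len = xdp->data_end - xdp->data;
	dma_addr_t dma;

	/* the page is already mapped DMA_BIDIRECTIONAL by the pool */
	dma = page_pool_get_dma_addr(virt_to_page(xdp->data)) +
	      offset_in_page(xdp->data);
	dma_sync_single_for_device(dev, dma, len, DMA_BIDIRECTIONAL);

	/* Fill the Tx descriptor with dma/len; no dma_unmap_single()
	 * on completion either, the page just returns to the pool.
	 */
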
This conversion also aligns with the broader effort to modularize and
unify XDP support across Intel Ethernet drivers.

Tested with various workloads, including netperf and XDP (PASS, DROP,
TX), both with and without IOMMU enabled. No regressions were observed.

Thanks,
Michal
---
v3:
- Fix the offset calculation for XDP_TX buffers in patch #3 (Larysa).
- Remove more dead code (Olek).
- Remove all hardcoded values introduced in patch #2 (Olek).
- Add an explanation of the performance drop for small MTU with the
  XDP_DROP action to the commit message of patch #3.

v2:
- Fix a traffic hang seen in iperf3 testing with MTU=9K (Jake).
- Fix crashes with MTU=9K during iperf3 testing (Jake).
- Improve the logic in the Rx path after it was integrated with libeth (Jake & Olek).
- Remove unused variables and structure members (Jake).
- Extract the fix for using a wrong allocation counter into a separate
  patch targeted at "net" (Paul).

v2: https://lore.kernel.org/intel-wired-lan/20250808155659.1053560-1-michal.kubiak@intel.com/
v1: https://lore.kernel.org/intel-wired-lan/20250704161859.871152-1-michal.kubiak@intel.com/
Michal Kubiak (3):
ice: remove legacy Rx and construct SKB
ice: drop page splitting and recycling
ice: switch to Page Pool
drivers/net/ethernet/intel/Kconfig | 1 +
drivers/net/ethernet/intel/ice/ice.h | 3 +-
drivers/net/ethernet/intel/ice/ice_base.c | 123 ++--
drivers/net/ethernet/intel/ice/ice_ethtool.c | 22 +-
drivers/net/ethernet/intel/ice/ice_lib.c | 1 -
drivers/net/ethernet/intel/ice/ice_main.c | 21 +-
drivers/net/ethernet/intel/ice/ice_txrx.c | 647 +++---------------
drivers/net/ethernet/intel/ice/ice_txrx.h | 125 +---
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 65 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 9 -
drivers/net/ethernet/intel/ice/ice_xsk.c | 146 +---
drivers/net/ethernet/intel/ice/ice_xsk.h | 6 +-
drivers/net/ethernet/intel/ice/virt/queues.c | 5 +-
13 files changed, 212 insertions(+), 962 deletions(-)
--
2.45.2