Message-ID: <20250911162233.1238034-1-aleksander.lobakin@intel.com>
Date: Thu, 11 Sep 2025 18:22:28 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: intel-wired-lan@...ts.osuosl.org
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>,
Michal Kubiak <michal.kubiak@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Simon Horman <horms@...nel.org>,
nxne.cnse.osdt.itp.upstreaming@...el.com,
bpf@...r.kernel.org,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH iwl-next 0/5] idpf: add XSk support
Add support for XSk xmit and receive using libeth_xdp.
This includes new interfaces to reconfigure/enable/disable only
a particular set of queues, plus support for the checksum-offload
XSk Tx metadata.
libeth_xdp's implementation mostly matches that of ice: batched
allocations and sends, unrolled descriptor writes, and so on. But
unlike the other Intel drivers, XSk wakeup is implemented using a
CSD/IPI instead of the HW "software interrupt". Across a wide range
of tests, this yielded noticeably better performance than SW
interrupts, and it also gives better control over which CPU handles
the NAPI loop (SW interrupts are subject to irqbalance and the like,
while CSDs are strictly pinned 1:1 to the core of the same index).
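For reference, the CSD-based wakeup described above could look roughly
like the pseudocode-style sketch below. It is not the actual idpf code
and is not compilable in isolation; the function and field names
(idpf_xsk_csd_func(), the per-queue napi/csd storage) are illustrative
assumptions, while INIT_CSD() and smp_call_function_single_async() are
the real kernel primitives involved:

```c
/* Sketch only: wake the NAPI context on a fixed remote CPU via an IPI
 * instead of triggering a HW "software interrupt". The CSD must live
 * as long as the queue (e.g. embedded in the queue vector), not on
 * the stack.
 */
static void idpf_xsk_csd_func(void *info)
{
	/* Runs in IPI (hardirq) context on the target CPU */
	napi_schedule((struct napi_struct *)info);
}

static int idpf_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
{
	/* Illustrative: queue index maps 1:1 to the target CPU index */
	u32 cpu = qid;

	INIT_CSD(&csd, idpf_xsk_csd_func, napi);	/* csd, napi per queue */
	smp_call_function_single_async(cpu, &csd);
	return 0;
}
```

Since the CPU is chosen directly here, the wakeup target is immune to
irqbalance moving the interrupt around, which is the control the cover
letter refers to.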
Note that header split is always disabled for XSk queues, as for now
we see no reason to have it there.
XSk xmit performance is up to 3x that of ice. XSk XDP_PASS is also a
good deal faster, as it uses the system percpu page_pools, so the only
overhead left is the memcpy(). The rest is at least comparable.
Alexander Lobakin (3):
idpf: implement XSk xmit
idpf: implement Rx path for AF_XDP
idpf: enable XSk features and ndo_xsk_wakeup
Michal Kubiak (2):
idpf: add virtchnl functions to manage selected queues
idpf: add XSk pool initialization
drivers/net/ethernet/intel/idpf/Makefile | 1 +
drivers/net/ethernet/intel/idpf/idpf.h | 7 +
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 72 +-
.../net/ethernet/intel/idpf/idpf_virtchnl.h | 32 +-
drivers/net/ethernet/intel/idpf/xdp.h | 3 +
drivers/net/ethernet/intel/idpf/xsk.h | 33 +
.../net/ethernet/intel/idpf/idpf_ethtool.c | 8 +-
drivers/net/ethernet/intel/idpf/idpf_lib.c | 10 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 451 ++++++-
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 1160 +++++++++++------
drivers/net/ethernet/intel/idpf/xdp.c | 44 +-
drivers/net/ethernet/intel/idpf/xsk.c | 633 +++++++++
12 files changed, 1977 insertions(+), 477 deletions(-)
create mode 100644 drivers/net/ethernet/intel/idpf/xsk.h
create mode 100644 drivers/net/ethernet/intel/idpf/xsk.c
---
Apply to either net-next or next-queue, but *before* Pavan's series.
Testing hints:
For testing XSk, you can use the basic xdpsock from [0]. There are 3
modes: `rxdrop` exercises XSk Rx, `txonly` XSk xmit, and `l2fwd`
covers both. You can run several instances on different queues.
To get the best performance, make sure xdpsock doesn't run on the same
CPU that handles the corresponding NIC queue (as per the official XSk
documentation).
[0] https://github.com/xdp-project/bpf-examples/tree/main/AF_XDP-example
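The hints above could translate into invocations like the following.
The interface name, queue index, and CPU number are assumptions, not
values from the series, and the commands are echoed rather than
executed so they can be reviewed before running on real hardware:

```shell
# Illustrative xdpsock invocations; "eth0", queue 4 and CPU 2 are
# placeholders. taskset keeps xdpsock off the CPU servicing the queue.
IFACE=eth0   # assumed interface name
QUEUE=4      # assumed queue index
CPU=2        # pick a CPU that does NOT service $IFACE queue $QUEUE IRQs

# XSk Rx path only
echo taskset -c "$CPU" xdpsock -i "$IFACE" -q "$QUEUE" --rxdrop --zero-copy
# XSk xmit only
echo taskset -c "$CPU" xdpsock -i "$IFACE" -q "$QUEUE" --txonly --zero-copy
# Rx + xmit
echo taskset -c "$CPU" xdpsock -i "$IFACE" -q "$QUEUE" --l2fwd --zero-copy
```

Running one such instance per queue, each pinned to a different CPU,
matches the "several instances on different queues" suggestion above.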
--
2.51.0