Message-Id: <cover.1604686496.git.lorenzo@kernel.org>
Date: Fri, 6 Nov 2020 19:19:06 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, lorenzo.bianconi@...hat.com,
davem@...emloft.net, kuba@...nel.org, brouer@...hat.com,
ilias.apalodimas@...aro.org
Subject: [PATCH v4 net-next 0/5] xdp: introduce bulking for page_pool tx return path
The XDP bulk APIs introduce a defer/flush mechanism to return
pages belonging to the same xdp_mem_allocator object
(identified via the mem.id field) to the page_pool in bulk.
This improves I-cache and D-cache locality, since
xdp_return_frame typically runs inside the driver's NAPI tx
completion loop.
Convert the mvneta, mvpp2 and mlx5 drivers to the xdp_return_frame_bulk APIs.
Changes since v3:
- align DEV_MAP_BULK_SIZE to XDP_BULK_QUEUE_SIZE
- refactor page_pool_put_page_bulk to avoid code duplication
Changes since v2:
- move mvneta changes into a dedicated patch
Changes since v1:
- improve comments
- rework xdp_return_frame_bulk routine logic
- move count and xa fields to the beginning of xdp_frame_bulk struct
- invert the for-loop logic in page_pool_put_page_bulk
Lorenzo Bianconi (5):
net: xdp: introduce bulking for xdp tx return path
net: page_pool: add bulk support for ptr_ring
net: mvneta: add xdp tx return bulking support
net: mvpp2: add xdp tx return bulking support
net: mlx5: add xdp tx return bulking support
drivers/net/ethernet/marvell/mvneta.c | 5 +-
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 5 +-
.../net/ethernet/mellanox/mlx5/core/en/xdp.c | 5 +-
include/net/page_pool.h | 26 ++++++++
include/net/xdp.h | 11 +++-
net/core/page_pool.c | 66 ++++++++++++++++---
net/core/xdp.c | 56 ++++++++++++++++
7 files changed, 160 insertions(+), 14 deletions(-)
--
2.26.2