Message-Id: <20230313190813.1036595-1-aleksander.lobakin@intel.com>
Date: Mon, 13 Mar 2023 20:08:09 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Song Liu <song@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Menglong Dong <imagedong@...cent.com>,
Mykola Lysenko <mykolal@...com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, bpf@...r.kernel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH bpf-next v3 0/4] xdp: recycle Page Pool backed skbs built from XDP frames
Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
__xdp_build_skb_from_frame() missed the moment when the networking stack
became able to recycle skb pages backed by a page_pool. This made
e.g. cpumap redirect even less effective than simple %XDP_PASS. veth was
also affected in some scenarios.
A lot of drivers already use skb_mark_for_recycle(); it has been around
for almost two years and there seem to be no issues with using it in the
generic code, too. {__,}xdp_release_frame() can then be removed, as it
loses its last user.
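
(For illustration, a minimal sketch of the idea, assuming the usual
net/core/xdp.c flow; this is not the exact hunk from the series:)

	/* When the xdp_frame's memory comes from a page_pool, mark the
	 * freshly built skb so the stack hands the pages back to the
	 * pool on skb free instead of releasing them the slow way.
	 */
	struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
						   struct sk_buff *skb,
						   struct net_device *dev)
	{
		/* ... existing headroom/data/frags setup ... */

		if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
			skb_mark_for_recycle(skb);

		/* ... rest unchanged: no explicit {__,}xdp_release_frame()
		 * needed anymore ...
		 */

		return skb;
	}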
Page Pool then becomes zero-alloc (or almost) in the above-mentioned
cases, too. Other memory type models (who needs them at this point?)
see no changes.
Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
IPv6 UDP, iavf w/XDP[0] (CONFIG_PAGE_POOL_STATS is enabled):
              src cpu Rx    drops      dst cpu Rx

Plain %XDP_PASS on baseline, Page Pool driver:
              2.1 Mpps      N/A        2.1 Mpps

cpumap redirect (cross-core, w/o leaving its NUMA node) on baseline:
              6.8 Mpps      5.0 Mpps   1.8 Mpps

cpumap redirect with skb PP recycling:
              7.9 Mpps      5.7 Mpps   2.2 Mpps
                                       +22% (from cpumap redir on baseline)
[0] https://github.com/alobakin/linux/commits/iavf-xdp
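
(Not part of the series, but for context: with CONFIG_PAGE_POOL_STATS=y,
the recycling rate can be eyeballed from the pool's counters. A
hypothetical helper, dump_pp_recycling() is a made-up name:)

	#include <linux/printk.h>
	#include <net/page_pool.h>

	/* Aggregate and print the pool's stats: after the series, the
	 * recycle counters should grow in the cpumap-redirect case
	 * instead of pages taking the slow alloc/release paths.
	 */
	static void dump_pp_recycling(struct page_pool *pool)
	{
		struct page_pool_stats stats = { };

		if (!page_pool_get_stats(pool, &stats))
			return; /* CONFIG_PAGE_POOL_STATS=n */

		pr_info("alloc fast %llu slow %llu, recycle cached %llu ring %llu\n",
			stats.alloc_stats.fast, stats.alloc_stats.slow,
			stats.recycle_stats.cached, stats.recycle_stats.ring);
	}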
Alexander Lobakin (4):
selftests/bpf: robustify test_xdp_do_redirect with more payload magics
net: page_pool, skbuff: make skb_mark_for_recycle() always available
xdp: recycle Page Pool backed skbs built from XDP frames
xdp: remove unused {__,}xdp_release_frame()
 include/linux/skbuff.h                       |  4 +--
 include/net/xdp.h                            | 29 ---------------
 net/core/xdp.c                               | 19 ++--------
 .../bpf/progs/test_xdp_do_redirect.c         | 36 +++++++++++++------

 4 files changed, 30 insertions(+), 58 deletions(-)
---
From v2[1]:
* fix the test_xdp_do_redirect selftest failing after the series: it
  was relying on the fact that %XDP_PASS frames can't be recycled on
  veth (BPF CI, Alexei), see the sketch below this list;
* explain "w/o leaving its node" in the cover letter (Jesper).
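
(Direction of that selftest fix, sketched with illustrative names rather
than the test's actual identifiers: only frames carrying the current
run's payload magic are counted, so a recycled page holding stale data
can't skew the counters:)

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	#define RUN_MAGIC 0x42424242

	/* Illustrative only: drop anything whose payload doesn't start
	 * with this run's magic, so stale (recycled) page contents are
	 * never miscounted as test traffic.
	 */
	SEC("xdp")
	int xdp_count_magic(struct xdp_md *ctx)
	{
		void *data = (void *)(long)ctx->data;
		void *data_end = (void *)(long)ctx->data_end;

		if (data + sizeof(__u32) > data_end)
			return XDP_DROP;

		if (*(__u32 *)data != RUN_MAGIC)
			return XDP_DROP;

		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";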
From v1[2]:
* make skb_mark_for_recycle() always available, otherwise there are
  build failures on non-PP systems (kbuild bot), see the sketch below
  this list;
* 'Page Pool' -> 'page_pool' when it's about a page_pool instance, not
API (Jesper);
* expanded test system info a bit in the cover letter (Jesper).
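
(A sketch of that approach, assuming the #ifdef simply moves inside the
helper so it compiles to a no-op on !CONFIG_PAGE_POOL builds; the actual
hunk may differ:)

	/* include/linux/skbuff.h: define the helper unconditionally and
	 * let it be a no-op when page_pool support is compiled out.
	 */
	static inline void skb_mark_for_recycle(struct sk_buff *skb)
	{
	#ifdef CONFIG_PAGE_POOL
		skb->pp_recycle = 1;
	#endif
	}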
[1] https://lore.kernel.org/bpf/20230303133232.2546004-1-aleksander.lobakin@intel.com
[2] https://lore.kernel.org/bpf/20230301160315.1022488-1-aleksander.lobakin@intel.com
--
2.39.2