Message-ID: <55a752e9-faf4-2b37-5492-c58dee3c170c@intel.com>
Date: Thu, 16 Mar 2023 12:57:55 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>
CC: Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Song Liu <song@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Menglong Dong <imagedong@...cent.com>,
Mykola Lysenko <mykolal@...com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, <bpf@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v3 0/4] xdp: recycle Page Pool backed skbs built
from XDP frames
From: Alexander Lobakin <aleksander.lobakin@...el.com>
Date: Mon, 13 Mar 2023 22:42:56 +0100
> Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
>
> __xdp_build_skb_from_frame() missed the moment when the networking stack
> became able to recycle skb pages backed by a page_pool. This made
> e.g. cpumap redirect even less effective than plain %XDP_PASS. veth was
> also affected in some scenarios.
> A lot of drivers already use skb_mark_for_recycle(); it's been around
> for almost two years and there seem to be no issues with using it in
> the generic code, too. {__,}xdp_release_frame() can then be removed,
> as it has lost its last user.
> Page Pool then becomes zero-alloc (or almost) in the abovementioned
> cases, too. Other memory type models (who needs them at this point)
> are unchanged.
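
For the record, the core change is tiny. A minimal sketch of the idea
(the wrapper and its name are mine for illustration; the series itself
open-codes the check in __xdp_build_skb_from_frame()):

#include <linux/skbuff.h>
#include <net/xdp.h>

/* If the frame's memory comes from a page_pool, flag the skb so the
 * core stack returns its pages to the pool on free instead of handing
 * them back to the page allocator.
 */
static void xdp_skb_mark_pp_recycle(const struct xdp_frame *xdpf,
				    struct sk_buff *skb)
{
	if (unlikely(xdpf->mem.type == MEM_TYPE_PAGE_POOL))
		skb_mark_for_recycle(skb);
}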
Sorry, our SMTP proxy went crazy and resent all my messages sent via
git-send-email several times over the last couple of days. Please
ignore the duplicates.
>
> Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
> IPv6 UDP, iavf w/XDP[0] (CONFIG_PAGE_POOL_STATS is enabled):
>
> Plain %XDP_PASS on baseline, Page Pool driver:
>
> src cpu Rx      drops           dst cpu Rx
> 2.1 Mpps        N/A             2.1 Mpps
>
> cpumap redirect (cross-core, w/o leaving its NUMA node) on baseline:
>
> 6.8 Mpps        5.0 Mpps        1.8 Mpps
>
> cpumap redirect with skb PP recycling:
>
> 7.9 Mpps        5.7 Mpps        2.2 Mpps
>                                 +22% (from cpumap redir on baseline)
>
> [0] https://github.com/alobakin/linux/commits/iavf-xdp
>
> Alexander Lobakin (4):
> selftests/bpf: robustify test_xdp_do_redirect with more payload magics
> net: page_pool, skbuff: make skb_mark_for_recycle() always available
> xdp: recycle Page Pool backed skbs built from XDP frames
> xdp: remove unused {__,}xdp_release_frame()
>
> include/linux/skbuff.h | 4 +--
> include/net/xdp.h | 29 ---------------
> net/core/xdp.c | 19 ++--------
> .../bpf/progs/test_xdp_do_redirect.c | 36 +++++++++++++------
> 4 files changed, 30 insertions(+), 58 deletions(-)
>
> ---
> From v2[1]:
> * fix the test_xdp_do_redirect selftest failing after the series: it
>   was relying on the assumption that %XDP_PASS frames can't be
>   recycled on veth (BPF CI, Alexei), see the sketch below this list;
> * explain "w/o leaving its node" in the cover letter (Jesper).
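
To illustrate that selftest fix: don't rely on a single magic at a
fixed offset, fill the whole payload with magics and verify all of them
on receive, so a recycled page still holding data from a previous frame
can't slip through. A rough user-space sketch (the names and the magic
value here are mine, not the actual selftest code):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TEST_MAGIC	0x42424242U	/* hypothetical marker value */

/* Fill the payload with the magic so every word is known in advance */
static void fill_payload(uint32_t *data, size_t words)
{
	size_t i;

	for (i = 0; i < words; i++)
		data[i] = TEST_MAGIC;
}

/* One stale word from a recycled page fails the whole check */
static bool check_payload(const uint32_t *data, size_t words)
{
	size_t i;

	for (i = 0; i < words; i++)
		if (data[i] != TEST_MAGIC)
			return false;

	return true;
}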
>
> From v1[2]:
> * make skb_mark_for_recycle() always available, otherwise there are
>   build failures on non-PP systems (kbuild bot), see the sketch below
>   this list;
> * 'Page Pool' -> 'page_pool' when it's about a page_pool instance, not
>   the API (Jesper);
> * expanded test system info a bit in the cover letter (Jesper).
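
Re making skb_mark_for_recycle() always available: the gist is to keep
the function defined unconditionally and compile the marking itself out
when page_pool isn't built in, roughly (a sketch of the idea, not the
exact hunk):

/* include/linux/skbuff.h (sketch): always defined, so callers such as
 * the generic XDP code build on !CONFIG_PAGE_POOL kernels too; there,
 * marking an skb is simply a no-op.
 */
static inline void skb_mark_for_recycle(struct sk_buff *skb)
{
#ifdef CONFIG_PAGE_POOL
	skb->pp_recycle = 1;
#endif
}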
>
> [1] https://lore.kernel.org/bpf/20230303133232.2546004-1-aleksander.lobakin@intel.com
> [2] https://lore.kernel.org/bpf/20230301160315.1022488-1-aleksander.lobakin@intel.com
Thanks,
Olek