Message-ID: <ZW3zvEbI6o4ydM_N@lore-desk>
Date: Mon, 4 Dec 2023 16:43:56 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: aleksander.lobakin@...el.com, netdev@...r.kernel.org,
davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
lorenzo.bianconi@...hat.com, bpf@...r.kernel.org, hawk@...nel.org,
toke@...hat.com, willemdebruijn.kernel@...il.com,
jasowang@...hat.com, sdf@...gle.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp
running in generic mode
> On Fri, 1 Dec 2023 14:48:26 +0100 Lorenzo Bianconi wrote:
> > Similar to native xdp, do not always linearize the skb in
> > netif_receive_generic_xdp routine but create a non-linear xdp_buff to be
> > processed by the eBPF program. This allows adding multi-buffer support
> > for xdp running in generic mode.
>
> Hm. How close is the xdp generic code to veth?
Actually they are quite close; the only difference is the use of the page_pool
APIs vs the page_frag_cache ones.
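To make the comparison concrete, the two allocation paths look roughly like
this (just a sketch; the pp_alloc_frag()/pfc_alloc_frag() wrappers are made-up
names, only page_pool_dev_alloc_frag() and page_frag_alloc() are the actual
kernel APIs):

#include <linux/mm.h>
#include <linux/gfp.h>
#include <net/page_pool/helpers.h>

/* veth-style: frags carved out of a page_pool instance */
static void *pp_alloc_frag(struct page_pool *pool, unsigned int size)
{
	unsigned int offset;
	struct page *page;

	page = page_pool_dev_alloc_frag(pool, &offset, size);
	if (!page)
		return NULL;

	return page_address(page) + offset;
}

/* generic-xdp-style: frags carved out of a (per-cpu) page_frag_cache */
static void *pfc_alloc_frag(struct page_frag_cache *nc, unsigned int size)
{
	return page_frag_alloc(nc, size, GFP_ATOMIC);
}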
> I wonder if it'd make sense to create a page pool instance for each
> core, we could then pass it into a common "reallocate skb into a
> page-pool backed, fragged form" helper. Common between this code
> and veth? Perhaps we could even get rid of the veth page pools
> and use the per cpu pools there?
Yes, I was actually thinking about that.
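Something along these lines for the per-cpu pools, I guess (very rough sketch;
the system_page_pool name, the init function and the params below are just
placeholders I made up, not code from an actual patch):

#include <linux/err.h>
#include <linux/percpu.h>
#include <linux/topology.h>
#include <net/page_pool/helpers.h>

static DEFINE_PER_CPU(struct page_pool *, system_page_pool);

static int generic_xdp_page_pools_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct page_pool_params params = {
			.order		= 0,
			.pool_size	= 256,
			.nid		= cpu_to_node(cpu),
			/* no dev set: pages are only used for skb frags,
			 * no DMA mapping is needed here
			 */
		};
		struct page_pool *pp;

		pp = page_pool_create(&params);
		if (IS_ERR(pp))
			return PTR_ERR(pp); /* cleanup of earlier pools omitted */

		per_cpu(system_page_pool, cpu) = pp;
	}

	return 0;
}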
I ran some preliminary tests to check whether we are introducing any
performance penalty.
My setup relies on a couple of veth pairs and an eBPF program that performs
XDP_REDIRECT from one pair to the other (a rough sketch of the program is
below, after the topology). I am running the program in XDP driver mode (not
generic mode).
v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01
v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11
v00: iperf3 client
v11: iperf3 server
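For reference, the redirect program is essentially the usual devmap-based one,
something like this (sketch only; the tx_port map and xdp_redirect_prog names
are placeholders, not the exact ones from my setup):

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* devmap holding the egress ifindex, populated from userspace */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_prog(struct xdp_md *ctx)
{
	/* redirect every frame to the device stored at key 0 */
	return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";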
I ran the test with different MTU values (1500B, 8KB, 64KB):
net-next veth codebase:
=======================
- MTU 1500: iperf3 ~ 4.37Gbps
- MTU 8000: iperf3 ~ 9.75Gbps
- MTU 64000: iperf3 ~ 11.24Gbps
net-next veth codebase + page_frag_cache instead of page_pool:
==============================================================
- MTU 1500: iperf3 ~ 4.99Gbps (+14%)
- MTU 8000: iperf3 ~ 8.5Gbps (-12%)
- MTU 64000: iperf3 ~ 11.9Gbps (+6%)
It seems there is no clear win from using page_pool over page_frag_cache (or
vice versa). What do you think?
Regards,
Lorenzo