Message-ID: <20231204120153.0d51729a@kernel.org>
Date: Mon, 4 Dec 2023 12:01:53 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: aleksander.lobakin@...el.com, netdev@...r.kernel.org,
davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
lorenzo.bianconi@...hat.com, bpf@...r.kernel.org, hawk@...nel.org,
toke@...hat.com, willemdebruijn.kernel@...il.com, jasowang@...hat.com,
sdf@...gle.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp
running in generic mode
On Mon, 4 Dec 2023 16:43:56 +0100 Lorenzo Bianconi wrote:
> yes, I was thinking about it actually.
> I ran some preliminary tests to check whether we are introducing any
> performance penalties.
> My setup relies on a couple of veth pairs and an eBPF program that
> performs XDP_REDIRECT from one pair to the other (see the sketch at the
> end of this message). I am running the program in XDP driver mode (not
> in generic mode).
>
> v00 (NS:ns0 - 192.168.0.1/24) <---> (NS:ns1 - 192.168.0.2/24) v01
> v10 (NS:ns1 - 192.168.1.1/24) <---> (NS:ns2 - 192.168.1.2/24) v11
>
> v00: iperf3 client
> v11: iperf3 server
>
> I ran the test with different MTU values (1500B, 8KB, 64KB):
>
> net-next veth codebase:
> =======================
> - MTU 1500: iperf3 ~ 4.37Gbps
> - MTU 8000: iperf3 ~ 9.75Gbps
> - MTU 64000: iperf3 ~ 11.24Gbps
>
> net-next veth codebase + page_frag_cache instead of page_pool:
> ==============================================================
> - MTU 1500: iperf3 ~ 4.99Gbps (+14%)
> - MTU 8000: iperf3 ~ 8.5Gbps (-12%)
> - MTU 64000: iperf3 ~ 11.9Gbps (+6%)
>
> It seems there is no clear winner between page_pool and
> page_frag_cache. What do you think?
Hm, interesting. Are the iperf processes running on different cores?
It may be worth pinning them (both to the same core and to different
cores) to make sure the cache effects are isolated.
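
As a rough illustration of the pinning idea, the minimal sketch below
pins the calling process to a single CPU with sched_setaffinity(2), the
same mechanism taskset(1) relies on (iperf3 can also pin itself via its
-A/--affinity option). The CPU number taken from argv is just an
example argument.

/* pin.c - minimal sketch: pin the calling process to one CPU */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);

	/* pid 0 means "the calling process" */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	printf("pinned to CPU %d\n", cpu);
	return 0;
}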
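
For reference, here is a minimal sketch of the kind of XDP_REDIRECT
program described in the setup above (it is not the program used for
the numbers in this thread; the map name tx_port, the single-slot
devmap layout and the program name are illustrative assumptions):

/* xdp_redirect_sketch.c - minimal sketch, not the program from this thread */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Single-slot devmap holding the egress ifindex; populated from
 * userspace before traffic starts. */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

/* "xdp.frags" tells libbpf to load the program with multi-buffer
 * (frags) support, needed for MTUs larger than a single page. */
SEC("xdp.frags")
int xdp_redirect_prog(struct xdp_md *ctx)
{
	__u32 key = 0;

	/* Redirect every frame to the ifindex stored in slot 0; the
	 * lower bits of the flags argument (XDP_PASS here) are the
	 * fallback action if the map lookup fails. */
	return bpf_redirect_map(&tx_port, key, XDP_PASS);
}

char _license[] SEC("license") = "GPL";

Userspace would attach this to the ingress veth and write the egress
ifindex into slot 0 of tx_port, e.g. via bpf_map_update_elem().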