Message-ID: <20230517075849.2af98d72@kernel.org>
Date: Wed, 17 May 2023 07:58:49 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: Maciej Fijalkowski <maciej.fijalkowski@...el.com>, Lorenzo Bianconi
<lorenzo.bianconi@...hat.com>, Yunsheng Lin <linyunsheng@...wei.com>,
netdev@...r.kernel.org, bpf@...r.kernel.org, davem@...emloft.net,
edumazet@...gle.com, pabeni@...hat.com, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com
Subject: Re: [RFC net-next] net: veth: reduce page_pool memory footprint
using half page per-buffer

On Wed, 17 May 2023 00:52:25 +0200 Lorenzo Bianconi wrote:
> I am testing this RFC patch in the scenario reported below:
>
> iperf tcp tx --> veth0 --> veth1 (xdp_pass) --> iperf tcp rx
>
> - 6.4.0-rc1 net-next:
> MTU 1500B: ~ 7.07 Gbps
> MTU 8000B: ~ 14.7 Gbps
>
> - 6.4.0-rc1 net-next + page_pool frag support in veth:
> MTU 1500B: ~ 8.57 Gbps
> MTU 8000B: ~ 14.5 Gbps
>
> side note: it seems there is a regression between 6.2.15 and 6.4.0-rc1 net-next
> (even without latest veth page_pool patches) in the throughput I can get in the
> scenario above, but I have not looked into it yet.
>
> - 6.2.15:
> MTU 1500B: ~ 7.91 Gbps
> MTU 8000B: ~ 14.1 Gbps
>
> - 6.4.0-rc1 net-next w/o commits [0],[1],[2]
> MTU 1500B: ~ 6.38 Gbps
> MTU 8000B: ~ 13.2 Gbps
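
For context, the "page_pool frag support" variant above boils down to
carving two half-page buffers out of each page via the page_pool
fragment API instead of dedicating a full page per frame. A minimal
sketch of that allocation path (not the actual veth patch; it assumes a
pool created with PP_FLAG_PAGE_FRAG and uses the in-tree API as of
~6.4, include/net/page_pool.h):

/*
 * Sketch only: hand out PAGE_SIZE/2 buffers from a fragment-enabled
 * page_pool. page_pool keeps serving fragments of the same page until
 * it is used up, then refills; *offset tells the caller where its
 * half-page buffer starts within the page. Error handling omitted.
 */
#include <net/page_pool.h>

static struct page *veth_alloc_half_page(struct page_pool *pool,
					 unsigned int *offset)
{
	return page_pool_dev_alloc_frag(pool, offset, PAGE_SIZE / 2);
}
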
If the benchmark is iperf, wouldn't working towards preserving GSO
status across XDP (assuming prog is multi-buf-capable) be the most
beneficial optimization?
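
FWIW, making the prog multi-buf-capable is mostly a matter of loading
it with frags support; a minimal sketch, assuming a libbpf recent
enough to understand the "xdp.frags" section name:

/* Minimal frags-aware XDP pass program (sketch). The "xdp.frags"
 * section name makes libbpf set BPF_F_XDP_HAS_FRAGS at load time, so
 * the kernel will run it on multi-buffer frames without linearizing
 * them first.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp.frags")
int xdp_pass_mb(struct xdp_md *ctx)
{
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";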