Message-ID: <aDXi3VpAOPHQ576e@mini-arch>
Date: Tue, 27 May 2025 09:05:49 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Tariq Toukan <tariqt@...dia.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>,
Richard Cochran <richardcochran@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, Moshe Shemesh <moshe@...dia.com>,
Mark Bloch <mbloch@...dia.com>, Gal Pressman <gal@...dia.com>,
Cosmin Ratiu <cratiu@...dia.com>,
Dragos Tatulea <dtatulea@...dia.com>
Subject: Re: [PATCH net-next V2 00/11] net/mlx5e: Add support for devmem and
io_uring TCP zero-copy
On 05/23, Tariq Toukan wrote:
> This series from the team adds support for zero-copy TCP RX with devmem
> and io_uring for ConnectX-7 NICs and above. For performance reasons and
> for simplicity, HW-GRO will also be turned on when header-data split
> mode is on.
>
> Find more details below.
>
> Regards,
> Tariq
>
> Performance
> ===========
>
> Test setup:
>
> * CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz (single NUMA)
> * NIC: ConnectX-7
> * Benchmarking tool: kperf [1]
> * Single TCP flow
> * Test duration: 60s
>
> With application thread and interrupts pinned to the *same* core:
>
> |------+-----------+----------|
> | MTU | epoll | io_uring |
> |------+-----------+----------|
> | 1500 | 61.6 Gbps | 114 Gbps |
> | 4096 | 69.3 Gbps | 151 Gbps |
> | 9000 | 67.8 Gbps | 187 Gbps |
> |------+-----------+----------|
>
> For the io_uring runs, CPU usage of the pinned core is 95%.
>
> Reproduction steps for io_uring:
>
> server --no-daemon -a 2001:db8::1 --no-memcmp --iou --iou_sendzc \
> --iou_zcrx --iou_dev_name eth2 --iou_zcrx_queue_id 2
>
> server --no-daemon -a 2001:db8::2 --no-memcmp --iou --iou_sendzc
>
> client --src 2001:db8::2 --dst 2001:db8::1 \
> --msg-zerocopy -t 60 --cpu-min=2 --cpu-max=2
>
> Patch overview:
> ================
>
> First, a netmem variant of skb_can_coalesce() is added to the core so
> that skb fragment coalescing can also be done on netmems.
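>
> A minimal sketch of what such a helper could look like, mirroring the
> existing page-based skb_can_coalesce() (the exact signature in the
> series may differ):
>
>     static inline bool skb_can_coalesce_netmem(struct sk_buff *skb, int i,
>                                                netmem_ref netmem, int off)
>     {
>             if (skb_zcopy(skb))
>                     return false;
>             if (i) {
>                     const skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
>
>                     /* Coalesce only when this is the same netmem and the
>                      * new chunk starts exactly where the last frag ends.
>                      */
>                     return netmem == skb_frag_netmem(frag) &&
>                            off == skb_frag_off(frag) + skb_frag_size(frag);
>             }
>             return false;
>     }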
>
> The next patches introduce some cleanups in the internal SHAMPO code
> and improvements to the HW-GRO capability checks against the FW.
>
> A separate page_pool is introduced for headers. Ethtool stats are added
> as well.
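>
> Roughly, a dedicated header pool could be set up like this (the field
> values and variable names below are illustrative assumptions, not the
> driver's actual configuration):
>
>     struct page_pool_params pp_params = {
>             .flags          = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
>             .order          = 0,            /* headers are small */
>             .pool_size      = ring_size,    /* assumed: one header per RX desc */
>             .nid            = numa_node,
>             .dev            = dma_dev,
>             .dma_dir        = DMA_FROM_DEVICE,
>             .max_len        = PAGE_SIZE,
>             .offset         = 0,
>     };
>     struct page_pool *hd_pool = page_pool_create(&pp_params);
>
>     if (IS_ERR(hd_pool))
>             return PTR_ERR(hd_pool);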
>
> Then the driver is converted to use the netmem API and to support page
> pools with unreadable netmem.
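>
> The key constraint is that payload netmem may be device memory the CPU
> cannot touch. A sketch of the idea (placement of these calls in the
> mlx5e datapath is assumed, not taken from the series):
>
>     /* Opt the payload pool into unreadable netmem; only safe once
>      * header-data split guarantees headers land in host memory.
>      */
>     pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
>
>     /* On RX completion, never dereference unreadable payload:
>      * netmem_address() returns NULL for net_iov-backed netmem.
>      */
>     if (!netmem_is_net_iov(netmem))
>             prefetch(netmem_address(netmem));
>     skb_add_rx_frag_netmem(skb, i, netmem, off, len, truesize);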
>
> The queue management ops are implemented.
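>
> Wiring these up means implementing struct netdev_queue_mgmt_ops; the
> mlx5e callback and type names in the sketch below are assumed for
> illustration:
>
>     static const struct netdev_queue_mgmt_ops mlx5e_queue_mgmt_ops = {
>             .ndo_queue_mem_size     = sizeof(struct mlx5e_queue_mem),
>             .ndo_queue_mem_alloc    = mlx5e_queue_mem_alloc,
>             .ndo_queue_mem_free     = mlx5e_queue_mem_free,
>             .ndo_queue_start        = mlx5e_queue_start,
>             .ndo_queue_stop         = mlx5e_queue_stop,
>     };
>
>     /* in netdev setup: */
>     netdev->queue_mgmt_ops = &mlx5e_queue_mgmt_ops;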
>
> Finally, the tcp-data-split ring parameter is exposed.
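>
> Once exposed, it can be toggled from userspace with a recent enough
> ethtool (eth2 as in the reproduction steps above):
>
>     ethtool -G eth2 tcp-data-split on
>     ethtool -g eth2        # should report "TCP data split: on"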
>
> Changelog
> =========
>
> Changes from v1 [0]:
> - Added support for skb_can_coalesce_netmem().
> - Avoid netmem_to_page() casts in the driver.
> - Fixed code to abide by the 80-char limit, with some exceptions to
> avoid code churn.
Since net-next is going to be closed for 2-3 weeks, can you also add a
patch for the TX side? It should be trivial (skip the DMA unmap for
niovs in TX completions, plus set netdev->netmem_tx=1).
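
Roughly what I have in mind (the mlx5e variable and function names here
are assumed):

    /* advertise devmem TX support */
    netdev->netmem_tx = true;

    /* In the TX completion path: niov frags are DMA-mapped by the core
     * via the dmabuf binding, so the driver must not unmap them itself.
     */
    if (!netmem_is_net_iov(frag_netmem))
            dma_unmap_page(dma_dev, dma_addr, len, DMA_TO_DEVICE);
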
And, btw, what about the issue that Cosmin raised in [0]? Is it addressed
in this series?
0: https://lore.kernel.org/netdev/9322c3c4826ed1072ddc9a2103cc641060665864.camel@nvidia.com/