Message-ID: <CAMB2axO4ySD2Lo9xzkkYdUqL2tHPcO02-h2HZiWT993wsU3NtA@mail.gmail.com>
Date: Thu, 4 Sep 2025 16:30:15 -0700
From: Amery Hung <ameryhung@...il.com>
To: cpaasch@...nai.com
Cc: Gal Pressman <gal@...dia.com>, Dragos Tatulea <dtatulea@...dia.com>,
Saeed Mahameed <saeedm@...dia.com>, Tariq Toukan <tariqt@...dia.com>, Mark Bloch <mbloch@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Stanislav Fomichev <sdf@...ichev.me>, netdev@...r.kernel.org,
linux-rdma@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH net-next v5 2/2] net/mlx5: Avoid copying payload to the
skb's linear part
On Thu, Sep 4, 2025 at 3:57 PM Christoph Paasch via B4 Relay
<devnull+cpaasch.openai.com@...nel.org> wrote:
>
> From: Christoph Paasch <cpaasch@...nai.com>
>
> mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> bytes from the page-pool to the skb's linear part. Those 256 bytes
> include part of the payload.
>
> When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> (and skb->head_frag is not set), we end up aggregating packets in the
> frag_list.
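
For readers following along: the frag_list fallback comes from the merge
logic in skb_gro_receive(). Very roughly, simplified from net/core/gro.c:

    unsigned int offset = skb_gro_offset(skb);  /* header bytes GRO consumed */
    unsigned int headlen = skb_headlen(skb);    /* bytes in the linear part */

    if (headlen <= offset) {
            /* payload lives entirely in frags: merge them into p */
    } else if (skb->head_frag) {
            /* linear part is page-backed: turn it into a frag and merge */
    } else {
            /* payload in a kmalloc'ed linear part: chain skb onto p's
             * frag_list -- the slow path described above
             */
    }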
>
> This is of course not good when we are CPU-limited. It also causes a
> worse skb->len/truesize ratio.
>
> So, let's avoid copying parts of the payload to the linear part. We use
> eth_get_headlen() to parse the headers and compute the length of the
> protocol headers, which will be used to copy the relevant bits to the
> skb's linear part.
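
eth_get_headlen() runs the flow dissector over the buffer and returns the
length of the protocol headers, capped at the length passed in. A rough
example of the effect (assuming a plain TCP/IPv4 frame with no TCP
options):

    /* The dissector stops at the end of the TCP header, so this returns
     * ETH_HLEN (14) + 20 (IPv4) + 20 (TCP) = 54: only 54 bytes are then
     * copied to the linear part instead of MLX5E_RX_MAX_HEAD (256).
     */
    headlen = eth_get_headlen(rq->netdev, head_addr, headlen);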
>
> We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking
> stack needs to call pskb_may_pull() later on, we don't need to reallocate
> memory.
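
pskb_may_pull() only reallocates when the requested pull does not fit in
what was allocated for the linear part. A sketch of a hypothetical caller
up the stack:

    /* Succeeds without reallocating as long as the requested length fits
     * in the MLX5E_RX_MAX_HEAD (256) bytes allocated for the head;
     * otherwise __pskb_pull_tail() has to go through pskb_expand_head().
     */
    if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
            return -EINVAL;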
>
> This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> LRO enabled):
>
> BEFORE:
> =======
> (netserver pinned to core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.01 32547.82
>
> (netserver pinned to adjacent core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 52531.67
>
> AFTER:
> ======
> (netserver pinned to core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 52896.06
>
> (netserver pinned to adjacent core receiving interrupts)
> $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> 87380 16384 262144 60.00 85094.90
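
(For reference, the column headers that "netperf ... -P 0" suppresses are:
recv socket size, send socket size, send message size, elapsed time in
seconds, and throughput in 10^6 bits/sec.)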
>
> Additional tests across a larger range of parameters w/ and w/o LRO, w/
> and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
> TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
> better performance with this patch.
>
> Reviewed-by: Eric Dumazet <edumazet@...gle.com>
> Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
> Signed-off-by: Christoph Paasch <cpaasch@...nai.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 8bedbda522808cbabc8e62ae91a8c25d66725ebb..0ac31c7fb64cd60720d390de45a5b6b453ed0a3f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -2047,6 +2047,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> dma_sync_single_for_cpu(rq->pdev, addr + head_offset, headlen,
> rq->buff.map_dir);
>
> + headlen = eth_get_headlen(rq->netdev, head_addr, headlen);
> +
> frag_offset += headlen;
> byte_cnt -= headlen;
> linear_hr = skb_headroom(skb);
> @@ -2123,6 +2125,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> pagep->frags++;
> while (++pagep < frag_page);
> }
> +
> + headlen = eth_get_headlen(rq->netdev, mxbuf->xdp.data, headlen);
> +
The data at mxbuf->xdp.data is most likely not headlen bytes long here.
The driver currently builds an xdp_buff with an empty linear part,
passes it to the XDP program, and afterwards assumes the layout is
unchanged, i.e. that the program did not call bpf_xdp_adjust_head() or
bpf_xdp_adjust_tail(). That assumption is not correct and I am working
on a fix. But if we keep the assumption for now, mxbuf->xdp.data will
not contain any headers or payload. What you are trying to do should
probably be:
    skb_frag_t *frag = &sinfo->frags[0];

    headlen = eth_get_headlen(rq->netdev, skb_frag_address(frag),
                              skb_frag_size(frag));
> __pskb_pull_tail(skb, headlen);
> } else {
> if (xdp_buff_has_frags(&mxbuf->xdp)) {
>
> --
> 2.50.1
>
>