Message-ID: <CADg4-L-YRbFeDsmeREZKJpe2aZ4g+LXbxNTPe_nCJ=7v3jgTgg@mail.gmail.com>
Date: Mon, 14 Jul 2025 15:22:34 -0700
From: Christoph Paasch <cpaasch@...nai.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>,
Mark Bloch <mbloch@...dia.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, linux-rdma@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next 2/2] net/mlx5: Avoid copying payload to the skb's
linear part
On Mon, Jul 14, 2025 at 7:23 AM Alexander Lobakin
<aleksander.lobakin@...el.com> wrote:
>
> From: Christoph Paasch Via B4 Relay <devnull+cpaasch.openai.com@...nel.org>
> Date: Sun, 13 Jul 2025 16:33:07 -0700
>
> > From: Christoph Paasch <cpaasch@...nai.com>
> >
> > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > include part of the payload.
> >
> > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > (and skb->head_frag is not set), we end up aggregating packets in the
>
> How did you end up with ->head_frag not set? IIRC mlx5 uses
> napi_build_skb(), which explicitly sets ->head_frag to true.
> It should be false only for kmalloced linear parts.
This particular code path calls napi_alloc_skb(), which ends up calling
__alloc_skb() and thus doesn't set head_frag to 1.
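(For context, the decision in skb_gro_receive() looks roughly like the
following; this is a paraphrased sketch of the logic in net/core/gro.c,
not the exact upstream code:)

	if (headlen <= offset) {
		/* All headers were already pulled: the payload sits
		 * in page frags and can be merged frag-by-frag. */
	} else if (skb->head_frag) {
		/* The linear part is a page fragment (the
		 * napi_build_skb() path), so the head can still be
		 * merged as a frag. */
	} else {
		/* kmalloc'ed linear part that contains payload (our
		 * case here): fall back to chaining the skb on the
		 * frag_list, which is the slow path. */
	}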
> > frag_list.
> >
> > This is of course not good when we are CPU-limited. It also causes a
> > worse skb->len/truesize ratio,...
> >
> > So, let's avoid copying parts of the payload to the linear part. The
> > goal here is to err on the side of caution and prefer to copy too little
> > instead of copying too much (because once payload has been copied over,
> > we trigger the above-described behavior in skb_gro_receive).
> >
> > So, we can derive a lower-bound estimate of the header space by looking
> > at cqe_l3/l4_hdr_type. This is now done in mlx5e_cqe_get_min_hdr_len().
> > We always assume that TCP timestamps are present, as that's the most
> > common use-case.
> >
> > That header-len is then used in mlx5e_skb_from_cqe_mpwrq_nonlinear for
> > the headlen (which defines what is being copied over). We still
> > allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking stack
> > needs to call pskb_may_pull() later on, we don't need to reallocate
> > memory.
> >
> > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > LRO enabled):
> >
> > BEFORE:
> > =======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > 87380 16384 262144 60.01 32547.82
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > 87380 16384 262144 60.00 52531.67
> >
> > AFTER:
> > ======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > 87380 16384 262144 60.00 52896.06
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > 87380 16384 262144 60.00 85094.90
> >
> > Signed-off-by: Christoph Paasch <cpaasch@...nai.com>
> > ---
> > drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 33 ++++++++++++++++++++++++-
> > 1 file changed, 32 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index 2bb32082bfccdc85d26987f792eb8c1047e44dd0..2de669707623882058e3e77f82d74893e5d6fefe 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -1986,13 +1986,40 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
> > } while (data_bcnt);
> > }
> >
> > +static u16
> > +mlx5e_cqe_get_min_hdr_len(const struct mlx5_cqe64 *cqe)
> > +{
> > + u16 min_hdr_len = sizeof(struct ethhdr);
> > + u8 l3_type = get_cqe_l3_hdr_type(cqe);
> > + u8 l4_type = get_cqe_l4_hdr_type(cqe);
> > +
> > + if (cqe_has_vlan(cqe))
> > + min_hdr_len += VLAN_HLEN;
>
> Can't Q-in-Q happen here?
Yes, see my reply below.
>
> > +
> > + if (l3_type == CQE_L3_HDR_TYPE_IPV4)
> > + min_hdr_len += sizeof(struct iphdr);
> > + else if (l3_type == CQE_L3_HDR_TYPE_IPV6)
> > + min_hdr_len += sizeof(struct ipv6hdr);
>
> You don't account for extension headers and the like here.
Yes - see my reply below.
>
> > +
> > + if (l4_type == CQE_L4_HDR_TYPE_UDP)
> > + min_hdr_len += sizeof(struct udphdr);
> > + else if (l4_type & (CQE_L4_HDR_TYPE_TCP_NO_ACK |
> > + CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA |
> > + CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA))
> > + /* Previous condition works because we know that
> > + * l4_type != 0x2 (CQE_L4_HDR_TYPE_UDP)
> > + */
> > + min_hdr_len += sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
> > +
> > + return min_hdr_len;
> > +}
> > +
> > static struct sk_buff *
> > mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
> > struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
> > u32 page_idx)
> > {
> > struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
> > - u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
> > struct mlx5e_frag_page *head_page = frag_page;
> > struct mlx5e_xdp_buff *mxbuf = &rq->mxbuf;
> > u32 frag_offset = head_offset;
> > @@ -2004,10 +2031,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > u32 linear_frame_sz;
> > u16 linear_data_len;
> > u16 linear_hr;
> > + u16 headlen;
> > void *va;
> >
> > prog = rcu_dereference(rq->xdp_prog);
> >
> > + headlen = min3(mlx5e_cqe_get_min_hdr_len(cqe), cqe_bcnt,
> > + (u16)MLX5E_RX_MAX_HEAD);
>
> For your usecase, have you tried setting headlen to just ETH_HLEN here?
> Fast GRO should still work for this case, then VLAN/IP/L4 layers will
> do a couple memcpy()s to pull their headers, but even on 32-bit MIPS
> this was faster than let's say eth_get_headlen() (which involves Flow
> Dissector) or open-coded header length assumptions as above.
>
> (the above was correct in 2020, when I last played with router
> drivers, but I hope nothing's been broken since then)
Yes, as you correctly point out, it is all about avoiding copying any
payload so that GRO stays fast.
I can give copying just ETH_HLEN a shot and see what perf I get. You
are probably right that it won't matter much. I just thought that,
since I have the bits in the CQE that give me some hints about which
headers are present, I could be slightly more efficient.
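(An untested sketch of that variant, just to make it concrete: copy
only the Ethernet header and let the upper layers pull what they need
via pskb_may_pull() later on. ETH_HLEN comes from <linux/if_ether.h>:)

	/* Untested: copy only the L2 header; the VLAN/IP/L4 layers
	 * will pull the rest of the headers on demand. */
	headlen = min_t(u16, ETH_HLEN, cqe_bcnt);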
Thanks,
Christoph
>
> > +
> > if (prog) {
> > /* area for bpf_xdp_[store|load]_bytes */
> > net_prefetchw(netmem_address(frag_page->netmem) + frag_offset);
>
> Thanks,
> Olek