Message-ID: <CADg4-L8a-FbB4A_dttfQXYuvOdnEcrYOYi4fG_7KBBWfaLL_ag@mail.gmail.com>
Date: Wed, 10 Sep 2025 10:36:36 -0700
From: Christoph Paasch <cpaasch@...nai.com>
To: Amery Hung <ameryhung@...il.com>
Cc: Gal Pressman <gal@...dia.com>, Dragos Tatulea <dtatulea@...dia.com>, 
	Saeed Mahameed <saeedm@...dia.com>, Tariq Toukan <tariqt@...dia.com>, Mark Bloch <mbloch@...dia.com>, 
	Leon Romanovsky <leon@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, Jesper Dangaard Brouer <hawk@...nel.org>, 
	John Fastabend <john.fastabend@...il.com>, Stanislav Fomichev <sdf@...ichev.me>, netdev@...r.kernel.org, 
	linux-rdma@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH net-next v5 2/2] net/mlx5: Avoid copying payload to the
 skb's linear part

On Tue, Sep 9, 2025 at 8:17 PM Amery Hung <ameryhung@...il.com> wrote:
>
> On Tue, Sep 9, 2025 at 11:18 AM Christoph Paasch <cpaasch@...nai.com> wrote:
> >
> > On Mon, Sep 8, 2025 at 9:00 PM Christoph Paasch <cpaasch@...nai.com> wrote:
> > >
> > > On Thu, Sep 4, 2025 at 4:30 PM Amery Hung <ameryhung@...il.com> wrote:
> > > >
> > > > On Thu, Sep 4, 2025 at 3:57 PM Christoph Paasch via B4 Relay
> > > > <devnull+cpaasch.openai.com@...nel.org> wrote:
> > > > >
> > > > > From: Christoph Paasch <cpaasch@...nai.com>
> > > > >
> > > > > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > > > > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > > > > include part of the payload.
> > > > >
> > > > > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > > > > (and skb->head_frag is not set), we end up aggregating packets in the
> > > > > frag_list.
> > > > >
> > > > > This is of course not good when we are CPU-limited. It also causes a
> > > > > worse skb->len/truesize ratio, ...
> > > > >
> > > > > So, let's avoid copying parts of the payload to the linear part. We use
> > > > > eth_get_headlen() to parse the headers and compute the length of the
> > > > > protocol headers, which will be used to copy the relevant bits to the
> > > > > skb's linear part.
> > > > >
> > > > > We still allocate MLX5E_RX_MAX_HEAD for the skb so that if the networking
> > > > > stack needs to call pskb_may_pull() later on, we don't need to reallocate
> > > > > memory.
> > > > >
> > > > > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > > > > LRO enabled):
> > > > >
> > > > > BEFORE:
> > > > > =======
> > > > > (netserver pinned to core receiving interrupts)
> > > > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > > > >  87380  16384 262144    60.01    32547.82
> > > > >
> > > > > (netserver pinned to adjacent core receiving interrupts)
> > > > > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > > > >  87380  16384 262144    60.00    52531.67
> > > > >
> > > > > AFTER:
> > > > > ======
> > > > > (netserver pinned to core receiving interrupts)
> > > > > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> > > > >  87380  16384 262144    60.00    52896.06
> > > > >
> > > > > (netserver pinned to adjacent core receiving interrupts)
> > > > >  $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> > > > >  87380  16384 262144    60.00    85094.90
> > > > >
> > > > > Additional tests across a larger range of parameters w/ and w/o LRO, w/
> > > > > and w/o IPv6-encapsulation, different MTUs (1500, 4096, 9000), different
> > > > > TCP read/write-sizes as well as UDP benchmarks, all have shown equal or
> > > > > better performance with this patch.
> > > > >
> > > > > Reviewed-by: Eric Dumazet <edumazet@...gle.com>
> > > > > Reviewed-by: Saeed Mahameed <saeedm@...dia.com>
> > > > > Signed-off-by: Christoph Paasch <cpaasch@...nai.com>
> > > > > ---
> > > > >  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 5 +++++
> > > > >  1 file changed, 5 insertions(+)
> > > > >
> > > > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > index 8bedbda522808cbabc8e62ae91a8c25d66725ebb..0ac31c7fb64cd60720d390de45a5b6b453ed0a3f 100644
> > > > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > > > > @@ -2047,6 +2047,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > > > >                 dma_sync_single_for_cpu(rq->pdev, addr + head_offset, headlen,
> > > > >                                         rq->buff.map_dir);
> > > > >
> > > > > +               headlen = eth_get_headlen(rq->netdev, head_addr, headlen);
> > > > > +
> > > > >                 frag_offset += headlen;
> > > > >                 byte_cnt -= headlen;
> > > > >                 linear_hr = skb_headroom(skb);
> > > > > @@ -2123,6 +2125,9 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> > > > >                                 pagep->frags++;
> > > > >                         while (++pagep < frag_page);
> > > > >                 }
> > > > > +
> > > > > +               headlen = eth_get_headlen(rq->netdev, mxbuf->xdp.data, headlen);
> > > > > +
> > > >
> > > > The size of mxbuf->xdp.data is most likely not headlen here.
> > > >
> > > > The driver currently generates an xdp_buff with empty linear data, passes
> > > > it to the xdp program and assumes the layout stays the same if the xdp
> > > > program does not change it through bpf_xdp_adjust_head() or
> > > > bpf_xdp_adjust_tail(). That assumption is not correct and I am working
> > > > on a fix. But, if we keep that assumption for now, mxbuf->xdp.data
> > > > will not contain any headers or payload. What you are trying to do
> > > > should probably be:
> > > >
> > > >         skb_frag_t *frag = &sinfo->frags[0];
> > > >
> > > >         headlen = eth_get_headlen(rq->netdev, skb_frag_address(frag),
> > > >                                   skb_frag_size(frag));
> >
> > So, when I look at the headlen I get, it is correct (even with my old
> > code using mxbuf->xdp.data).
> >
> > To make sure I test the right thing, which scenario would
> > mxbuf->xdp.data not contain any headers or payload ? What do I need to
> > do to reproduce that ?
>
> From a quick look at the code, could it be that
> skb_flow_dissect_flow_keys_basic() returns false so that
> eth_get_headlen() always returns sizeof(*eth)?

No, the headlen values were correct (meaning, they matched the actual
length of the headers):

This is TCP traffic with a simple print after eth_get_headlen():
[130982.311088] mlx5e_skb_from_cqe_mpwrq_nonlinear xdp headlen is 86

So, eth_get_headlen() was able to parse the headers correctly.
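(86 bytes would be consistent with, e.g., a 14-byte Ethernet header plus a
40-byte IPv6 header plus a 32-byte TCP header with options.)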

My xdp program is as simple as possible:
SEC("xdp.frags")
int xdp_pass_prog(struct xdp_md *ctx)
{
    return XDP_PASS;
}
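
For reference, a self-contained version of that pass-through program would
look roughly like the following (the includes, the license line and the
comment are additions assumed here, not taken from the mail above):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Pass every frame up the stack unchanged. The "xdp.frags" section name
 * tells libbpf to load the program with BPF_F_XDP_HAS_FRAGS, i.e. it is
 * multi-buffer aware and can run on non-linear xdp_buffs.
 */
SEC("xdp.frags")
int xdp_pass_prog(struct xdp_md *ctx)
{
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

It can be built with something like
"clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o" and attached with
"ip link set dev <ifname> xdpdrv obj xdp_pass.o sec xdp.frags".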


> The linear part
> contains nothing meaningful before __pskb_pull_tail(), so it is possible
> for skb_flow_dissect_flow_keys_basic() to fail.
>
> >
> > Thanks,
> > Christoph
> >
> > >
> > > Ok, I think I understand what you mean! Thanks for taking the time to explain!
> > >
> > > I will do some tests on my side to make sure I get it right.
> > >
> > > As your change goes to net and mine to net-next, I can wait until yours
> > > is in the tree so that there aren't any conflicts that need to be
> > > taken care of.
>
> Will copy you on the mlx5 non-linear xdp fixing patchset.

Thx!


Christoph
