Message-ID: <CADg4-L9XoY_dwqicTLb62xbiy3+b3Wwf__qX97WSA9S8tuNjjQ@mail.gmail.com>
Date: Mon, 21 Jul 2025 14:44:17 -0700
From: Christoph Paasch <cpaasch@...nai.com>
To: Tariq Toukan <ttoukan.linux@...il.com>
Cc: Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>, 
	Mark Bloch <mbloch@...dia.com>, Andrew Lunn <andrew+netdev@...n.ch>, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, linux-rdma@...r.kernel.org, 
	netdev@...r.kernel.org
Subject: Re: [PATCH net-next 2/2] net/mlx5: Avoid copying payload to the skb's linear part

Hello!

On Mon, Jul 14, 2025 at 12:29 AM Tariq Toukan <ttoukan.linux@...il.com> wrote:
>
>
>
> On 14/07/2025 2:33, Christoph Paasch via B4 Relay wrote:
> > From: Christoph Paasch <cpaasch@...nai.com>
> >
> > mlx5e_skb_from_cqe_mpwrq_nonlinear() copies MLX5E_RX_MAX_HEAD (256)
> > bytes from the page-pool to the skb's linear part. Those 256 bytes
> > include part of the payload.
> >
> > When attempting to do GRO in skb_gro_receive, if headlen > data_offset
> > (and skb->head_frag is not set), we end up aggregating packets in the
> > frag_list.
> >
> > This is of course not good when we are CPU-limited. It also causes a
> > worse skb->len/truesize ratio.
> >
> > So, let's avoid copying parts of the payload to the linear part. The
> > goal here is to err on the side of caution and prefer to copy too
> > little rather than too much (because once payload has been copied
> > over, we trigger the above-described behavior in skb_gro_receive).
> >
> > We can make a rough, lower-bound estimate of the header length by
> > looking at cqe_l3/l4_hdr_type. This is now done in
> > mlx5e_cqe_get_min_hdr_len(). We always assume that TCP timestamps are
> > present, as that's the most common use-case.
> >
> > That header length is then used as the headlen in
> > mlx5e_skb_from_cqe_mpwrq_nonlinear() (which defines how much is
> > copied over). We still allocate MLX5E_RX_MAX_HEAD bytes for the skb
> > so that if the networking stack needs to call pskb_may_pull() later
> > on, we don't need to reallocate memory.
> >
> > This gives a nice throughput increase (ARM Neoverse-V2 with CX-7 NIC and
> > LRO enabled):
> >
> > BEFORE:
> > =======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> >   87380  16384 262144    60.01    32547.82
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> >   87380  16384 262144    60.00    52531.67
> >
> > AFTER:
> > ======
> > (netserver pinned to core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,9 -P 0 -l 60 -- -m 256K -M 256K
> >   87380  16384 262144    60.00    52896.06
> >
> > (netserver pinned to adjacent core receiving interrupts)
> > $ netperf -H 10.221.81.118 -T 80,10 -P 0 -l 60 -- -m 256K -M 256K
> >   87380  16384 262144    60.00    85094.90
> >
>
> Nice improvement.
>
> Did you test impact on other archs?
>
> Did you test impact on non-LRO flows?
> Specifically:
> a. Large MTU, tcp stream.
> b. Large MTU, small UDP packets.

It took a while, but I have extended my benchmarks to a much larger test matrix:

With / Without LRO
With / Without IPv6 encap
MTU: 1500, 4096, 9000
IRQs on the same core as the app / IRQs on a core adjacent to the app
TCP with write/read-size 64KB and 512KB
UDP with 64B and 1400B

A full matrix across all of the above (2 LRO settings x 2 encap settings
x 3 MTUs x 2 core placements x 4 workloads) for a total of 96 tests.

No significant regressions (at the 10% threshold).

Numerous improvements (above the 10% threshold) in the TCP workloads:

  TCP 512-Kbyte, core 8, MTU 1500, LRO on, tunnel off     49810.51 -> 61924.39  (+24.3% ↑)
  TCP 512-Kbyte, core 8, MTU 1500, LRO on, tunnel on      24897.29 -> 42404.18  (+70.3% ↑)
  TCP 512-Kbyte, core 8, MTU 4096, LRO off, tunnel on     35218.00 -> 41608.82  (+18.1% ↑)
  TCP 512-Kbyte, core 8, MTU 4096, LRO on, tunnel on      25056.58 -> 42231.90  (+68.5% ↑)
  TCP 512-Kbyte, core 8, MTU 9000, LRO off, tunnel off    38688.81 -> 50152.49  (+29.6% ↑)
  TCP 512-Kbyte, core 8, MTU 9000, LRO off, tunnel on     23067.36 -> 42593.14  (+84.6% ↑)
  TCP 512-Kbyte, core 8, MTU 9000, LRO on, tunnel on      24671.25 -> 41276.60  (+67.3% ↑)
  TCP 512-Kbyte, core 9, MTU 1500, LRO on, tunnel on      25078.41 -> 42473.55  (+69.4% ↑)
  TCP 512-Kbyte, core 9, MTU 4096, LRO off, tunnel off    36962.68 -> 40727.38  (+10.2% ↑)
  TCP 512-Kbyte, core 9, MTU 4096, LRO on, tunnel on      24890.12 -> 42248.13  (+69.7% ↑)
  TCP 512-Kbyte, core 9, MTU 9000, LRO off, tunnel off    45620.36 -> 58454.83  (+28.1% ↑)
  TCP 512-Kbyte, core 9, MTU 9000, LRO off, tunnel on     23006.81 -> 42985.67  (+86.8% ↑)
  TCP 512-Kbyte, core 9, MTU 9000, LRO on, tunnel on      24539.75 -> 42295.60  (+72.4% ↑)
  TCP 64-Kbyte, core 8, MTU 1500, LRO on, tunnel off      38187.87 -> 45568.38  (+19.3% ↑)
  TCP 64-Kbyte, core 8, MTU 1500, LRO on, tunnel on       22683.89 -> 43351.23  (+91.1% ↑)
  TCP 64-Kbyte, core 8, MTU 4096, LRO on, tunnel on       23653.41 -> 43988.30  (+86.0% ↑)
  TCP 64-Kbyte, core 8, MTU 9000, LRO off, tunnel off     37677.10 -> 48778.02  (+29.5% ↑)
  TCP 64-Kbyte, core 8, MTU 9000, LRO off, tunnel on      23960.71 -> 41828.04  (+74.6% ↑)
  TCP 64-Kbyte, core 8, MTU 9000, LRO on, tunnel off      57001.62 -> 68577.28  (+20.3% ↑)
  TCP 64-Kbyte, core 8, MTU 9000, LRO on, tunnel on       24068.93 -> 43836.63  (+82.1% ↑)
  TCP 64-Kbyte, core 9, MTU 1500, LRO on, tunnel off      60887.66 -> 68647.38  (+12.7% ↑)
  TCP 64-Kbyte, core 9, MTU 1500, LRO on, tunnel on       22463.53 -> 34560.19  (+53.9% ↑)
  TCP 64-Kbyte, core 9, MTU 4096, LRO on, tunnel on       23253.21 -> 43358.30  (+86.5% ↑)
  TCP 64-Kbyte, core 9, MTU 9000, LRO off, tunnel off     40471.13 -> 55189.89  (+36.4% ↑)
  TCP 64-Kbyte, core 9, MTU 9000, LRO off, tunnel on      23880.19 -> 42457.94  (+77.8% ↑)
  TCP 64-Kbyte, core 9, MTU 9000, LRO on, tunnel on       22040.72 -> 30249.36  (+37.2% ↑)

(I also learned that mlx5e_skb_from_cqe_mpwrq_nonlinear() is used even
when LRO is off, as long as the MTU is large, which is why the
improvements above show up in the LRO-off cases as well.)
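
For context, my (simplified) understanding of why that is: the linear
MPWQE handler is only chosen when a whole frame fits within a single
stride, so a large MTU forces the nonlinear path regardless of LRO.
Roughly (an illustrative sketch, not the actual driver code; I believe
the real check lives around mlx5e_rx_mpwqe_is_linear_skb() and involves
more than this):

	/* Hypothetical helper: the linear handler is only viable when
	 * headroom plus the frame fits in a single stride. Names and
	 * the exact math here are illustrative.
	 */
	static bool mpwqe_frame_fits_linear(u32 headroom, u32 hw_mtu,
					    u32 stride_size)
	{
		return headroom + hw_mtu <= stride_size;
	}

With MTU 9000 and a 4K stride the check fails, so
mlx5e_skb_from_cqe_mpwrq_nonlinear() handles the packet whether LRO is
on or off.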

(I will include the additional benchmark data in a resubmission)

The primary remaining question is how to handle the IB case. If
get_cqe_l3_hdr_type() returns 0x0 for IB, I can key off of that.
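
Concretely, something like this (untested sketch; it assumes IB CQEs
report CQE_L3_HDR_TYPE_NONE (0x0) as the L3 type, which is exactly the
open question):

	static u16
	mlx5e_cqe_get_min_hdr_len(const struct mlx5_cqe64 *cqe)
	{
		/* IPoIB (or anything the HW did not parse as L3):
		 * fall back to copying the full MLX5E_RX_MAX_HEAD,
		 * as before this patch.
		 */
		if (get_cqe_l3_hdr_type(cqe) == CQE_L3_HDR_TYPE_NONE)
			return MLX5E_RX_MAX_HEAD;
		...
	}

That would keep the Ethernet estimate unchanged while preserving the
old behavior for IPoIB.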

Thoughts?


Thanks,
Christoph



>
>
> > Signed-off-by: Christoph Paasch <cpaasch@...nai.com>
> > ---
> >   drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 33 ++++++++++++++++++++++++-
> >   1 file changed, 32 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index 2bb32082bfccdc85d26987f792eb8c1047e44dd0..2de669707623882058e3e77f82d74893e5d6fefe 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -1986,13 +1986,40 @@ mlx5e_shampo_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
> >       } while (data_bcnt);
> >   }
> >
> > +static u16
> > +mlx5e_cqe_get_min_hdr_len(const struct mlx5_cqe64 *cqe)
> > +{
> > +     u16 min_hdr_len = sizeof(struct ethhdr);
> > +     u8 l3_type = get_cqe_l3_hdr_type(cqe);
> > +     u8 l4_type = get_cqe_l4_hdr_type(cqe);
> > +
> > +     if (cqe_has_vlan(cqe))
> > +             min_hdr_len += VLAN_HLEN;
> > +
> > +     if (l3_type == CQE_L3_HDR_TYPE_IPV4)
> > +             min_hdr_len += sizeof(struct iphdr);
> > +     else if (l3_type == CQE_L3_HDR_TYPE_IPV6)
> > +             min_hdr_len += sizeof(struct ipv6hdr);
> > +
> > +     if (l4_type == CQE_L4_HDR_TYPE_UDP)
> > +             min_hdr_len += sizeof(struct udphdr);
> > +     else if (l4_type & (CQE_L4_HDR_TYPE_TCP_NO_ACK |
> > +                         CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA |
> > +                         CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA))
> > +             /* This bitmask check is safe because the previous
> > +              * branch already ruled out l4_type == 0x2 (CQE_L4_HDR_TYPE_UDP).
> > +              */
> > +             min_hdr_len += sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
> > +
> > +     return min_hdr_len;
> > +}
> > +
> >   static struct sk_buff *
> >   mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
> >                                  struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
> >                                  u32 page_idx)
>
> BTW, this function handles IPoIB as well, not only Eth.
>
> >   {
> >       struct mlx5e_frag_page *frag_page = &wi->alloc_units.frag_pages[page_idx];
> > -     u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
> >       struct mlx5e_frag_page *head_page = frag_page;
> >       struct mlx5e_xdp_buff *mxbuf = &rq->mxbuf;
> >       u32 frag_offset    = head_offset;
> > @@ -2004,10 +2031,14 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
> >       u32 linear_frame_sz;
> >       u16 linear_data_len;
> >       u16 linear_hr;
> > +     u16 headlen;
> >       void *va;
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> >
> > +     headlen = min3(mlx5e_cqe_get_min_hdr_len(cqe), cqe_bcnt,
> > +                    (u16)MLX5E_RX_MAX_HEAD);
> > +
> >       if (prog) {
> >               /* area for bpf_xdp_[store|load]_bytes */
> >               net_prefetchw(netmem_address(frag_page->netmem) + frag_offset);
> >
>
