Message-ID: <20240220120821.1Tbz6IeI@linutronix.de>
Date: Tue, 20 Feb 2024 13:08:21 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>,
	bpf@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH RFC net-next 1/2] net: Reference bpf_redirect_info via
 task_struct on PREEMPT_RT.

On 2024-02-20 11:42:57 [+0100], Jesper Dangaard Brouer wrote:
> This seems low...
> Have you remembered to disable Ethernet flow-control?

No, but one side says:
| i40e 0000:3d:00.1 eno2np1: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None

I did this anyway:

>  # ethtool -A ixgbe1 rx off tx off
>  # ethtool -A i40e2 rx off tx off

and it didn't change much.
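
For completeness, the pause state can be read back on both ends with the
query counterpart of -A:

 # ethtool -a eno2np1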

> 
> > | Summary                 8,436,294 rx/s                  0 err/s
> 
> You want to see the "extended" info via cmdline (or Ctrl+\)
> 
>  # xdp-bench drop -e eth1
> 
> 
> > 
> > with "-t 8 -b 64". I started with 2 and then increased until rx/s was
> > falling again. I have ixgbe on the sending side and i40e on the
> 
> With ixgbe on the sending side, my testlab shows I need -t 2.
> 
> With -t 2 :
> Summary                14,678,170 rx/s                  0 err/s
>   receive total        14,678,170 pkt/s        14,678,170 drop/s         0 error/s
>     cpu:1              14,678,170 pkt/s        14,678,170 drop/s         0 error/s
>   xdp_exception                 0 hit/s
> 
> with -t 4:
> 
> Summary                10,255,385 rx/s                  0 err/s
>   receive total        10,255,385 pkt/s        10,255,385 drop/s         0 error/s
>     cpu:1              10,255,385 pkt/s        10,255,385 drop/s         0 error/s
>   xdp_exception                 0 hit/s
> 
> > receiving side. I tried to receive on ixgbe but this ended with -ENOMEM
> > | # xdp-bench drop eth1
> > | Failed to attach XDP program: Cannot allocate memory
> > 
> > This is v6.8-rc5 on both sides. Let me see where this is coming from…
> > 
> 
> Another pitfall with ixgbe is that it does a full link reset when
> adding/removing XDP prog on device.  This can be annoying if connected
> back-to-back, because "remote" pktgen will stop on link reset.

So I replaced nr_cpu_ids with 64 and booted with maxcpus=64 so that I can
run xdp-bench on the ixgbe.
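
For reference, the -ENOMEM comes (as far as I can see in v6.8) from this
check in ixgbe_xdp_setup(): the driver wants one XDP TX queue per CPU and
gives up if the machine has more CPUs than it can cover even with queue
sharing. The 64 went in place of nr_cpu_ids:

	/* one XDP TX queue per CPU, shared (with a lock) once the CPU
	 * count exceeds the queue count - beyond 2x the driver bails out.
	 */
	if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2)
		return -ENOMEM;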

So: i40e sending, ixgbe receiving.

-t 2

| Summary                 2,348,800 rx/s                  0 err/s
|   receive total         2,348,800 pkt/s         2,348,800 drop/s                0 error/s
|     cpu:0               2,348,800 pkt/s         2,348,800 drop/s                0 error/s
|   xdp_exception                 0 hit/s

-t 4
| Summary                 4,158,199 rx/s                  0 err/s
|   receive total         4,158,199 pkt/s         4,158,199 drop/s                0 error/s
|     cpu:0               4,158,199 pkt/s         4,158,199 drop/s                0 error/s
|   xdp_exception                 0 hit/s

-t 8
| Summary                 5,612,861 rx/s                  0 err/s        
|   receive total         5,612,861 pkt/s         5,612,861 drop/s                0 error/s      
|     cpu:0               5,612,861 pkt/s         5,612,861 drop/s                0 error/s      
|   xdp_exception                 0 hit/s        

Going higher makes the rate drop. With -t 8 it floats between 5.5M and
5.7M rx/s.

Doing "ethtool -G eno2np1 tx 4096 rx 4096" on the i40 makes it worse,
using the default 512/512 gets the numbers from above, going below 256
makes it worse.
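
The current and preset-maximum ring sizes, for reference, can be read
back with:

 # ethtool -g eno2np1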

Receiving on i40e, sending on ixgbe:

-t 2
|Summary                 3,042,957 rx/s                  0 err/s
|  receive total         3,042,957 pkt/s         3,042,957 drop/s                0 error/s
|    cpu:60              3,042,957 pkt/s         3,042,957 drop/s                0 error/s
|  xdp_exception                 0 hit/s

-t 4
|Summary                 5,442,166 rx/s                  0 err/s
|  receive total         5,442,166 pkt/s         5,442,166 drop/s                0 error/s
|    cpu:60              5,442,166 pkt/s         5,442,166 drop/s                0 error/s
|  xdp_exception                 0 hit/s


-t 6
| Summary                 7,023,406 rx/s                  0 err/s
|   receive total         7,023,406 pkt/s         7,023,406 drop/s                0 error/s
|     cpu:60              7,023,406 pkt/s         7,023,406 drop/s                0 error/s
|   xdp_exception                 0 hit/s


-t 8
| Summary                 7,540,915 rx/s                  0 err/s
|   receive total         7,540,915 pkt/s         7,540,915 drop/s                0 error/s
|     cpu:60              7,540,915 pkt/s         7,540,915 drop/s                0 error/s
|   xdp_exception                 0 hit/s

-t 10
|Summary                 7,699,143 rx/s                  0 err/s
|  receive total         7,699,143 pkt/s         7,699,143 drop/s                0 error/s
|    cpu:60              7,699,143 pkt/s         7,699,143 drop/s                0 error/s
|  xdp_exception                 0 hit/s

-t 18
| Summary                 7,784,946 rx/s                  0 err/s
|   receive total         7,784,946 pkt/s         7,784,946 drop/s                0 error/s
|     cpu:60              7,784,946 pkt/s         7,784,946 drop/s                0 error/s
|   xdp_exception                 0 hit/s

After -t 18 it drops back down into the 2M range.
Now I am worse off than before: -t 8 says 7.5M where it did 8.4M this
morning. Do you maybe have a .config for me, in case I did not enable
the performance switch?
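
(The knobs I would compare first, just guessing at the usual suspects,
are the lock/memory debug options, e.g.:

 # grep -E 'CONFIG_(PROVE_LOCKING|DEBUG_LOCK_ALLOC|LOCKDEP|KASAN)=' .config
)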

> --Jesper

Sebastian
