Message-ID:
 <AM5PR04MB3139A4FDA4DB6FB5A1DB1C7E8830A@AM5PR04MB3139.eurprd04.prod.outlook.com>
Date: Mon, 10 Jul 2023 06:16:45 +0000
From: Wei Fang <wei.fang@....com>
To: Andrew Lunn <andrew@...n.ch>
CC: "davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
	<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>, "ast@...nel.org" <ast@...nel.org>,
	"daniel@...earbox.net" <daniel@...earbox.net>, "hawk@...nel.org"
	<hawk@...nel.org>, "john.fastabend@...il.com" <john.fastabend@...il.com>,
	Shenwei Wang <shenwei.wang@....com>, Clark Wang <xiaoning.wang@....com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>, dl-linux-imx
	<linux-imx@....com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>
Subject: RE: [PATCH net 3/3] net: fec: increase the size of tx ring and update
 thresholds of tx ring

> -----Original Message-----
> From: Andrew Lunn <andrew@...n.ch>
> Sent: July 9, 2023 3:04
> To: Wei Fang <wei.fang@....com>
> Cc: davem@...emloft.net; edumazet@...gle.com; kuba@...nel.org;
> pabeni@...hat.com; ast@...nel.org; daniel@...earbox.net;
> hawk@...nel.org; john.fastabend@...il.com; Shenwei Wang
> <shenwei.wang@....com>; Clark Wang <xiaoning.wang@....com>;
> netdev@...r.kernel.org; dl-linux-imx <linux-imx@....com>;
> linux-kernel@...r.kernel.org; bpf@...r.kernel.org
> Subject: Re: [PATCH net 3/3] net: fec: increase the size of tx ring and update
> thresholds of tx ring
> 
> > > How does this affect platforms like Vybrid with its Fast Ethernet?
> > Sorry, I don't have the Vybrid platform, but I don't think it has
> > much impact; at most it just takes up some more memory.
> 
> It has been 6 months since the page pool patches were posted and I asked
> about benchmark results for other platforms like Vybrid. Is it so hard to get
> hold of reference platforms? Fugang Duan used to have a test farm of all sorts
> of boards and reported to me the regressions I introduced with MDIO changes
> and PM changes. As somebody who seems to be an NXP FEC maintainer, I would
> expect you to have access to a range of hardware. Especially since XDP and
> eBPF are a bit of a niche for the embedded processors which NXP produces, you
> want to be sure your changes don't regress the main use cases, which I guess
> are plain networking.
> 
Sorry, the Vybrid platform is not currently maintained by us, and Vybrid is not
included in our usual Yocto SDK RC releases. Even if I could find a Vybrid board, it
probably could not run the latest kernel image, because the latest kernel does not
match the old Yocto SDK, and the new Yocto SDK does not support the Vybrid
platform. I also asked my colleague on the test team who is in charge of Ethernet
testing, and she had not even heard of the Vybrid platform.

> > > Does the burst latency go up?
> > No, for the FEC, when a packet is attached to the BDs, the software will
> > immediately trigger the hardware to send the packet. In addition, I
> > think it may improve the latency, because the tx ring is larger and
> > more packets can be attached to the BD ring for burst traffic.
> 
> And a bigger burst means more latency. Read about Buffer bloat.
> 
Perhaps, but not necessarily. For example, in some cases burst packets may stay
in the Qdisc instead of the hardware queue if the ring size is small.
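
For context, the transmit path follows the usual descriptor-ring pattern: the skb
is attached to the next free buffer descriptor, the hardware is kicked right away,
and once the ring fills up the queue is stopped so further packets wait in the
Qdisc. The following is only a simplified sketch of that generic pattern; the
structures and the ring_kick() helper are illustrative, not the actual fec_main.c
code:

/*
 * Simplified sketch of a descriptor-ring transmit path, to illustrate why
 * the ring size decides how much of a burst sits in hardware rather than
 * in the Qdisc. Illustrative only; not the real fec_main.c code.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct tx_ring {
	unsigned int size;          /* number of buffer descriptors (BDs) */
	unsigned int next_to_use;   /* producer index, advanced on xmit */
	unsigned int next_to_clean; /* consumer index, advanced on completion */
};

/* Hypothetical doorbell; a real driver writes a "descriptor active" register. */
static void ring_kick(struct tx_ring *ring) { }

static unsigned int tx_ring_free(const struct tx_ring *r)
{
	/* one slot is kept empty so a full ring can be told from an empty one */
	return (r->next_to_clean + r->size - r->next_to_use - 1) % r->size;
}

static netdev_tx_t sketch_xmit(struct sk_buff *skb, struct net_device *ndev,
			       struct tx_ring *ring)
{
	if (tx_ring_free(ring) < 1 + skb_shinfo(skb)->nr_frags) {
		/* Ring full: stop the queue; later packets back up in the Qdisc. */
		netif_stop_queue(ndev);
		return NETDEV_TX_BUSY;
	}

	/* ...map the skb, fill the BD at next_to_use, hand it to hardware... */
	ring->next_to_use = (ring->next_to_use + 1) % ring->size;

	/* Kick the hardware immediately; it starts sending regardless of how
	 * many BDs are currently queued. */
	ring_kick(ring);

	return NETDEV_TX_OK;
}

With a larger ring, more of a burst lands in the BD ring before the queue is
stopped, which is the trade-off being discussed here: less time in the Qdisc,
but potentially more data sitting in hardware.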

> While you have iperf running saturating the link, try a ping as well. How does
> ping latency change with more TX buffers?
> 
Per your suggestion, I tested ping latency on the i.MX8ULP, i.MX8MM and i.MX93
platforms while iperf was saturating the link. The results do not appear to be
very different between the two ring sizes.
i.MX93 with ring size 1024:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.114 -c 16
PING 10.193.102.114 (10.193.102.114) 56(84) bytes of data.
64 bytes from 10.193.102.114: icmp_seq=1 ttl=63 time=3.78 ms
64 bytes from 10.193.102.114: icmp_seq=2 ttl=63 time=2.16 ms
64 bytes from 10.193.102.114: icmp_seq=3 ttl=63 time=3.31 ms
64 bytes from 10.193.102.114: icmp_seq=4 ttl=63 time=2.11 ms
64 bytes from 10.193.102.114: icmp_seq=5 ttl=63 time=3.43 ms
64 bytes from 10.193.102.114: icmp_seq=6 ttl=63 time=3.20 ms
64 bytes from 10.193.102.114: icmp_seq=7 ttl=63 time=3.20 ms
64 bytes from 10.193.102.114: icmp_seq=8 ttl=63 time=3.75 ms
64 bytes from 10.193.102.114: icmp_seq=9 ttl=63 time=3.21 ms
64 bytes from 10.193.102.114: icmp_seq=10 ttl=63 time=3.76 ms
64 bytes from 10.193.102.114: icmp_seq=11 ttl=63 time=2.16 ms
64 bytes from 10.193.102.114: icmp_seq=12 ttl=63 time=2.67 ms
64 bytes from 10.193.102.114: icmp_seq=13 ttl=63 time=3.59 ms
64 bytes from 10.193.102.114: icmp_seq=14 ttl=63 time=2.55 ms
64 bytes from 10.193.102.114: icmp_seq=15 ttl=63 time=2.54 ms
64 bytes from 10.193.102.114: icmp_seq=16 ttl=63 time=3.88 ms

--- 10.193.102.114 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15027ms
rtt min/avg/max/mdev = 2.112/3.082/3.875/0.606 ms

i.MX93 with ring size 512:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.114 -c 16
PING 10.193.102.114 (10.193.102.114) 56(84) bytes of data.
64 bytes from 10.193.102.114: icmp_seq=1 ttl=63 time=2.74 ms
64 bytes from 10.193.102.114: icmp_seq=2 ttl=63 time=3.32 ms
64 bytes from 10.193.102.114: icmp_seq=3 ttl=63 time=2.72 ms
64 bytes from 10.193.102.114: icmp_seq=4 ttl=63 time=3.36 ms
64 bytes from 10.193.102.114: icmp_seq=5 ttl=63 time=3.41 ms
64 bytes from 10.193.102.114: icmp_seq=6 ttl=63 time=2.67 ms
64 bytes from 10.193.102.114: icmp_seq=7 ttl=63 time=2.77 ms
64 bytes from 10.193.102.114: icmp_seq=8 ttl=63 time=3.38 ms
64 bytes from 10.193.102.114: icmp_seq=9 ttl=63 time=2.54 ms
64 bytes from 10.193.102.114: icmp_seq=10 ttl=63 time=3.36 ms
64 bytes from 10.193.102.114: icmp_seq=11 ttl=63 time=3.44 ms
64 bytes from 10.193.102.114: icmp_seq=12 ttl=63 time=2.80 ms
64 bytes from 10.193.102.114: icmp_seq=13 ttl=63 time=2.86 ms
64 bytes from 10.193.102.114: icmp_seq=14 ttl=63 time=3.90 ms
64 bytes from 10.193.102.114: icmp_seq=15 ttl=63 time=2.50 ms
64 bytes from 10.193.102.114: icmp_seq=16 ttl=63 time=2.89 ms

--- 10.193.102.114 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15028ms
rtt min/avg/max/mdev = 2.496/3.040/3.898/0.394 ms

i.MX8MM with ring size 1024:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.126 -c 16
PING 10.193.102.126 (10.193.102.126) 56(84) bytes of data.
64 bytes from 10.193.102.126: icmp_seq=1 ttl=127 time=1.34 ms
64 bytes from 10.193.102.126: icmp_seq=2 ttl=127 time=2.07 ms
64 bytes from 10.193.102.126: icmp_seq=3 ttl=127 time=2.40 ms
64 bytes from 10.193.102.126: icmp_seq=4 ttl=127 time=1.48 ms
64 bytes from 10.193.102.126: icmp_seq=5 ttl=127 time=1.69 ms
64 bytes from 10.193.102.126: icmp_seq=6 ttl=127 time=1.54 ms
64 bytes from 10.193.102.126: icmp_seq=7 ttl=127 time=2.30 ms
64 bytes from 10.193.102.126: icmp_seq=8 ttl=127 time=1.94 ms
64 bytes from 10.193.102.126: icmp_seq=9 ttl=127 time=4.25 ms
64 bytes from 10.193.102.126: icmp_seq=10 ttl=127 time=1.75 ms
64 bytes from 10.193.102.126: icmp_seq=11 ttl=127 time=1.25 ms
64 bytes from 10.193.102.126: icmp_seq=12 ttl=127 time=2.04 ms
64 bytes from 10.193.102.126: icmp_seq=13 ttl=127 time=2.31 ms
64 bytes from 10.193.102.126: icmp_seq=14 ttl=127 time=2.18 ms
64 bytes from 10.193.102.126: icmp_seq=15 ttl=127 time=2.25 ms
64 bytes from 10.193.102.126: icmp_seq=16 ttl=127 time=1.37 ms

--- 10.193.102.126 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15026ms
rtt min/avg/max/mdev = 1.248/2.011/4.250/0.686 ms

i.MX8MM with ring size 512:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.126 -c 16
PING 10.193.102.126 (10.193.102.126) 56(84) bytes of data.
64 bytes from 10.193.102.126: icmp_seq=1 ttl=63 time=4.82 ms
64 bytes from 10.193.102.126: icmp_seq=2 ttl=63 time=4.67 ms
64 bytes from 10.193.102.126: icmp_seq=3 ttl=63 time=3.74 ms
64 bytes from 10.193.102.126: icmp_seq=4 ttl=63 time=3.87 ms
64 bytes from 10.193.102.126: icmp_seq=5 ttl=63 time=3.30 ms
64 bytes from 10.193.102.126: icmp_seq=6 ttl=63 time=3.79 ms
64 bytes from 10.193.102.126: icmp_seq=7 ttl=127 time=2.12 ms
64 bytes from 10.193.102.126: icmp_seq=8 ttl=127 time=1.99 ms
64 bytes from 10.193.102.126: icmp_seq=9 ttl=127 time=2.15 ms
64 bytes from 10.193.102.126: icmp_seq=10 ttl=127 time=1.82 ms
64 bytes from 10.193.102.126: icmp_seq=11 ttl=127 time=1.92 ms
64 bytes from 10.193.102.126: icmp_seq=12 ttl=127 time=1.23 ms
64 bytes from 10.193.102.126: icmp_seq=13 ttl=127 time=2.00 ms
64 bytes from 10.193.102.126: icmp_seq=14 ttl=127 time=1.66 ms
64 bytes from 10.193.102.126: icmp_seq=15 ttl=127 time=1.75 ms
64 bytes from 10.193.102.126: icmp_seq=16 ttl=127 time=2.24 ms

--- 10.193.102.126 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15026ms
rtt min/avg/max/mdev = 1.226/2.691/4.820/1.111 ms

i.MX8ULP with ring size 1024:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.216 -c 16
PING 10.193.102.216 (10.193.102.216) 56(84) bytes of data.
64 bytes from 10.193.102.216: icmp_seq=1 ttl=63 time=3.40 ms
64 bytes from 10.193.102.216: icmp_seq=2 ttl=63 time=5.46 ms
64 bytes from 10.193.102.216: icmp_seq=3 ttl=63 time=5.55 ms
64 bytes from 10.193.102.216: icmp_seq=4 ttl=63 time=5.97 ms
64 bytes from 10.193.102.216: icmp_seq=5 ttl=63 time=6.26 ms
64 bytes from 10.193.102.216: icmp_seq=6 ttl=63 time=0.963 ms
64 bytes from 10.193.102.216: icmp_seq=7 ttl=63 time=4.10 ms
64 bytes from 10.193.102.216: icmp_seq=8 ttl=63 time=4.55 ms
64 bytes from 10.193.102.216: icmp_seq=9 ttl=63 time=1.24 ms
64 bytes from 10.193.102.216: icmp_seq=10 ttl=63 time=6.96 ms
64 bytes from 10.193.102.216: icmp_seq=11 ttl=63 time=3.27 ms
64 bytes from 10.193.102.216: icmp_seq=12 ttl=63 time=6.57 ms
64 bytes from 10.193.102.216: icmp_seq=13 ttl=63 time=2.99 ms
64 bytes from 10.193.102.216: icmp_seq=14 ttl=63 time=1.70 ms
64 bytes from 10.193.102.216: icmp_seq=15 ttl=63 time=1.79 ms
64 bytes from 10.193.102.216: icmp_seq=16 ttl=63 time=1.42 ms

--- 10.193.102.216 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15026ms
rtt min/avg/max/mdev = 0.963/3.886/6.955/2.009 ms

i.MX8ULP with ring size 512:
fangwei@...gwei-OptiPlex-7050:~$ ping 10.193.102.216 -c 16
PING 10.193.102.216 (10.193.102.216) 56(84) bytes of data.
64 bytes from 10.193.102.216: icmp_seq=1 ttl=63 time=5.70 ms
64 bytes from 10.193.102.216: icmp_seq=2 ttl=63 time=5.89 ms
64 bytes from 10.193.102.216: icmp_seq=3 ttl=63 time=3.37 ms
64 bytes from 10.193.102.216: icmp_seq=4 ttl=63 time=5.07 ms
64 bytes from 10.193.102.216: icmp_seq=5 ttl=63 time=1.47 ms
64 bytes from 10.193.102.216: icmp_seq=6 ttl=63 time=3.45 ms
64 bytes from 10.193.102.216: icmp_seq=7 ttl=63 time=1.35 ms
64 bytes from 10.193.102.216: icmp_seq=8 ttl=63 time=6.62 ms
64 bytes from 10.193.102.216: icmp_seq=9 ttl=63 time=1.41 ms
64 bytes from 10.193.102.216: icmp_seq=10 ttl=63 time=6.43 ms
64 bytes from 10.193.102.216: icmp_seq=11 ttl=63 time=1.41 ms
64 bytes from 10.193.102.216: icmp_seq=12 ttl=63 time=6.75 ms
64 bytes from 10.193.102.216: icmp_seq=13 ttl=63 time=4.76 ms
64 bytes from 10.193.102.216: icmp_seq=14 ttl=63 time=3.85 ms
64 bytes from 10.193.102.216: icmp_seq=15 ttl=63 time=3.50 ms
64 bytes from 10.193.102.216: icmp_seq=16 ttl=63 time=1.37 ms

--- 10.193.102.216 ping statistics ---
16 packets transmitted, 16 received, 0% packet loss, time 15027ms
rtt min/avg/max/mdev = 1.349/3.900/6.749/1.985 ms

> Ideally you want enough transmit buffers to keep the link full, but not more. If
> the driver is using BQL, the network stack will help with this.
> 
> > Below are the results on the i.MX6UL/8MM/8MP/8ULP/93 platforms; i.MX6UL
> > and 8ULP only support Fast Ethernet, the others support 1G.
> 
> Thanks for the benchmark numbers. Please get into the habit of including
> them. We like to see justification for any sort of performance tweaks.
> 
> 	Andrew
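
On the BQL point: wiring byte queue limits into a driver is mostly a matter of
reporting how many bytes were queued and completed, so the stack can cap how
much data sits in the TX ring at once. Below is only a minimal sketch of that
pattern using the generic netdev_tx_*_queue() helpers; it is not a statement
about what the fec driver does today:

/*
 * Minimal sketch of BQL accounting in a driver. Illustrative only; it does
 * not describe the current state of drivers/net/ethernet/freescale/fec_main.c.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* In ndo_start_xmit(), after the skb has been attached to a descriptor: */
static void tx_account_queued(struct net_device *ndev, struct sk_buff *skb)
{
	netdev_tx_sent_queue(netdev_get_tx_queue(ndev, 0), skb->len);
}

/* In the TX completion path (NAPI poll), after cleaning descriptors: */
static void tx_account_completed(struct net_device *ndev,
				 unsigned int pkts, unsigned int bytes)
{
	netdev_tx_completed_queue(netdev_get_tx_queue(ndev, 0), pkts, bytes);
}

/* When the ring is torn down or restarted, the BQL state must be reset
 * as well, otherwise its accounting gets out of sync: */
static void tx_account_reset(struct net_device *ndev)
{
	netdev_tx_reset_queue(netdev_get_tx_queue(ndev, 0));
}

With that in place, the stack dynamically limits the bytes in flight per queue,
so a larger ring mainly gives headroom for bursts instead of adding steady-state
queuing.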
