Message-ID: <6521b6ed-ed67-469e-9cc8-b08c489cba10@lunn.ch>
Date: Mon, 15 Jul 2024 22:47:17 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Shenwei Wang <shenwei.wang@....com>
Cc: Wei Fang <wei.fang@....com>, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Clark Wang <xiaoning.wang@....com>,
	Rasmus Villemoes <linux@...musvillemoes.dk>, imx@...ts.linux.dev,
	netdev@...r.kernel.org, linux-imx@....com
Subject: Re: [PATCH net-next] net: fec: Enable SOC specific rx-usecs
 coalescence default setting

On Mon, Jul 15, 2024 at 02:54:49PM -0500, Shenwei Wang wrote:
> The current FEC driver uses a single default rx-usecs coalescence setting
> across all SoCs. This approach leads to suboptimal latency on newer,
> high-performance SoCs such as the i.MX8QM and i.MX8M.

Does ethtool -C work on these devices?
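
For reference, checking and tuning it at runtime would look something
like this (eth0 is a placeholder for the board's interface name):

	$ ethtool -c eth0                 # show current coalesce settings
	$ ethtool -C eth0 rx-usecs 100    # lower the RX coalesce delay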

Interrupt coalescence is about more than latency; it is also about CPU load.

Does NAPI polling work correctly on these devices? If so, interrupts
should stay disabled for much of the time under load, and interrupt
latency then matters much less.
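
A quick way to check is to sample the IRQ and packet counters twice
under sustained traffic (the IRQ name and interface are board-specific;
adjust the grep accordingly):

	$ grep -i ethernet /proc/interrupts
	$ ip -s link show eth0

A high ratio of received packets to interrupts means polling is keeping
the interrupt masked most of the time.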

Have you benchmarked CPU usage with this patch across a range of
traffic bandwidths and burst patterns? How does it differ?
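
Something along these lines, repeated for each candidate rx-usecs value
(assuming iperf3 and sysstat are available on the board; the server
address is a placeholder):

	$ iperf3 -c <server> -t 30 &
	$ mpstat -P ALL 1 30

plus a small-packet, bursty workload, since that is where coalescing
settings tend to matter most.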

> For example, the following are the ping results on an i.MX8QXP board:
> 
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
> 
> The current default rx-usecs value of 1000us was originally optimized for
> CPU-bound systems like the i.MX2x and i.MX6x. However, for the i.MX8 and
> later generations, CPU performance is no longer a limiting factor, so the
> rx-usecs value should be reduced to improve receive latency.
> 
> The following are the ping results with the 100us setting:
> 
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
> 
> Fixes: df727d4547de ("net: fec: don't reset irq coalesce settings to defaults on "ip link up"")

A Fixes: tag is not correct here; nothing was broken. This is maybe an
optimisation, maybe a deoptimisation, depending on your use case.

And net-next is closed at the moment anyway.

	Andrew
