Message-ID: <ada2c1b3-08b7-498f-91d5-3d5c1c88e042@lunn.ch>
Date: Thu, 1 Aug 2024 01:56:08 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Joe Damato <jdamato@...tly.com>, Shenwei Wang <shenwei.wang@....com>,
	Wei Fang <wei.fang@....com>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Clark Wang <xiaoning.wang@....com>, imx@...ts.linux.dev,
	netdev@...r.kernel.org, linux-imx@....com
Subject: Re: [PATCH v2 net-next resent] net: fec: Enable SOC specific
 rx-usecs coalescence default setting

On Tue, Jul 30, 2024 at 11:17:05AM +0100, Joe Damato wrote:
> On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> > The current FEC driver uses a single default rx-usecs coalescence setting
> > across all SoCs. This approach leads to suboptimal latency on newer, high
> > performance SoCs such as i.MX8QM and i.MX8M.
> > 
> > For example, the following are the ping results on an i.MX8QXP board:
> > 
> > $ ping 192.168.0.195
> > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
> > 
> > The current default rx-usecs value of 1000us was originally optimized for
> > CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
> > generations, CPU performance is no longer a limiting factor. Consequently,
> > the rx-usecs value should be reduced to improve receive latency.
> > 
> > The following are the ping results with the 100us setting:
> > 
> > $ ping 192.168.0.195
> > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
> > 
> > Performance testing using iperf revealed no noticeable impact on
> > network throughput or CPU utilization.
> 
> I'm not sure this short paragraph addresses Andrew's comment:
> 
>   Have you benchmarked CPU usage with this patch, for a range of traffic
>   bandwidths and burst patterns. How does it differ?
> 
> Maybe you could provide more details of the iperf tests you ran? It
> seems odd that CPU usage is unchanged.
> 
> If the system is more reactive (due to lower coalesce settings and
> IRQs firing more often), you'd expect CPU usage to increase,
> wouldn't you?

Hi Joe

It is not as simple as that.

Consider a VoIP system, a Cisco or Snom phone. It will be receiving a
packet about every 2ms. This change in interrupt coalescing will have
no effect on CPU load; there will still be an interrupt per packet.
What this change does do, however, is reduce the latency, as can be
seen in the ping output. That said, anybody building a phone knows
about

ethtool -C|--coalesce

and will either configure the value lower or turn coalescing off
altogether. Also, CCITT recommends 50ms end-to-end delay for a
national call, so going from 1.5ms to 0.4ms is in the noise.
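
For example, something along these lines lowers or disables the RX
coalescing timer on a given interface (eth0 and the values here are
just placeholders, not a recommendation; some drivers reject a value
of 0):

$ ethtool -c eth0                # show current coalescing settings
$ ethtool -C eth0 rx-usecs 100   # shorter coalescing window
$ ethtool -C eth0 rx-usecs 0     # turn time-based RX coalescing off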

Now consider bulk transfer at line rate. The receive buffer is going
to fill with multiple packets, NAPI is going to get its budget of 64
packets, and the interrupt will be left disabled. NAPI will then poll
the device every so often, receiving packets. Since interrupts are
off, the coalesce time makes no difference.
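
Roughly, a NAPI poll callback looks something like this (a simplified,
generic sketch, not the actual fec code; the my_* helpers are made-up
placeholders for the driver's own RX-clean and IRQ re-enable routines):

#include <linux/netdevice.h>

static int my_clean_rx_ring(struct napi_struct *napi, int budget);
static void my_enable_rx_irq(struct napi_struct *napi);

static int my_napi_poll(struct napi_struct *napi, int budget)
{
	/* Pull up to 'budget' packets off the RX ring. */
	int work_done = my_clean_rx_ring(napi, budget);

	/* Ring drained: leave polled mode and re-arm the RX IRQ. */
	if (work_done < budget && napi_complete_done(napi, work_done))
		my_enable_rx_irq(napi);

	/* If work_done == budget, we stay in polled mode and the IRQ,
	 * and with it the coalescing timer, never comes into play.
	 */
	return work_done;
}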

Now consider packets arriving at about 0.5ms intervals. That is way
too slow for NAPI to go into polled mode. It does, however, mean that
two packets would typically be received in each coalescence period.
With the proposed change, an interrupt would instead be triggered for
each packet, doubling the interrupt load.

But think about a packet every 0.5ms. That is 2000 packets per
second. Even the older CPUs should be able to handle that.
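
Back of the envelope, assuming a steady 0.5ms arrival interval:

  rx-usecs = 1000: ~1000 interrupts/s, about 2 packets per interrupt
  rx-usecs =  100: ~2000 interrupts/s, 1 packet per interrupt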

What I would really like to know is the real use case this change is
for. For me, ping is not a use case.

     Andrew
