Message-ID: <Zqi9oRGbTGDUfjhi@LQ3V64L9R2>
Date: Tue, 30 Jul 2024 11:17:05 +0100
From: Joe Damato <jdamato@...tly.com>
To: Shenwei Wang <shenwei.wang@....com>
Cc: Wei Fang <wei.fang@....com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Clark Wang <xiaoning.wang@....com>, imx@...ts.linux.dev,
netdev@...r.kernel.org, linux-imx@....com
Subject: Re: [PATCH v2 net-next resent] net: fec: Enable SOC specific
rx-usecs coalescence default setting

On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> The current FEC driver uses a single default rx-usecs coalescence setting
> across all SoCs. This approach leads to suboptimal latency on newer,
> high-performance SoCs such as i.MX8QM and i.MX8M.
>
> For example, the following are the ping results on an i.MX8QXP board:
>
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
>
> The current default rx-usecs value of 1000us was originally optimized for
> CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
> generations, CPU performance is no longer a limiting factor. Consequently,
> the rx-usecs value should be lowered to improve receive latency.
>
> The following are the ping results with the 100us setting:
>
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
>
> Performance testing using iperf revealed no noticeable impact on
> network throughput or CPU utilization.

I'm not sure this short paragraph addresses Andrew's comment:

  Have you benchmarked CPU usage with this patch, for a range of traffic
  bandwidths and burst patterns. How does it differ?

Maybe you could provide more details of the iperf tests you ran? It
seems odd that CPU usage is unchanged.

If the system is more reactive (due to lower coalesce settings and
IRQs firing more often), you'd expect CPU usage to increase,
wouldn't you?
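
If it helps when re-running the tests, rx-usecs can be swept at runtime
with `ethtool -C <iface> rx-usecs <n>` while watching CPU usage, so you
don't need a rebuild per setting. The rough sketch below does the same
thing via the standard ETHTOOL_GCOALESCE/ETHTOOL_SCOALESCE ioctls from
linux/ethtool.h -- illustrative only, nothing FEC-specific, and the file
and binary names are arbitrary:

/*
 * Rough sketch (illustrative only): read the current rx-usecs on an
 * interface and set a new value via the ETHTOOL_GCOALESCE /
 * ETHTOOL_SCOALESCE ioctls -- the same knob that
 * `ethtool -C <iface> rx-usecs <n>` drives.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
	struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
	struct ifreq ifr;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <iface> <rx-usecs>\n", argv[0]);
		return 1;
	}

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&ec;

	/* Read the current coalescing parameters. */
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GCOALESCE");
		return 1;
	}
	printf("current rx-usecs: %u\n", ec.rx_coalesce_usecs);

	/* Write the requested rx-usecs back. */
	ec.cmd = ETHTOOL_SCOALESCE;
	ec.rx_coalesce_usecs = (unsigned int)atoi(argv[2]);
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_SCOALESCE");
		return 1;
	}
	printf("set rx-usecs to %u\n", ec.rx_coalesce_usecs);

	return 0;
}

Build with something like `gcc -O2 -o set-rx-usecs set-rx-usecs.c` and
run it as root against the interface under test while iperf is running.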
- Joe