Message-ID: <33b06fd6-3eb3-4eb7-8091-7ebe8a8373ba@bootlin.com>
Date: Mon, 19 Jan 2026 16:03:59 +0100
From: Maxime Chevallier <maxime.chevallier@...tlin.com>
To: "Russell King (Oracle)" <rmk+kernel@...linux.org.uk>,
Andrew Lunn <andrew@...n.ch>, Heiner Kallweit <hkallweit1@...il.com>
Cc: Alexandre Torgue <alexandre.torgue@...s.st.com>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, linux-arm-kernel@...ts.infradead.org,
linux-stm32@...md-mailman.stormreply.com,
Maxime Coquelin <mcoquelin.stm32@...il.com>, netdev@...r.kernel.org,
Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH RFC net-next] net: stmmac: enable RPS and RBU interrupts

Hi Russell,

On 17/01/2026 00:25, Russell King (Oracle) wrote:
> Enable receive process stopped and receive buffer unavailable
> interrupts, so that the statistic counters can be updated.
>
> Signed-off-by: Russell King (Oracle) <rmk+kernel@...linux.org.uk>
> ---
>
> Maxime,
>
> You may find this patch useful, as it makes the "rx_buf_unav_irq"
> and "rx_process_stopped_irq" ethtool statistic counters functional.
> This means that the lack of receive descriptors can still be detected
> even if the receive side doesn't actually stall.
>
> I'm not sure why we publish these statistic counters if we don't
> enable the interrupts to allow them to ever be non-zero.
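
If I'm reading the intent right, the change amounts to also setting the
RBU/RPS enable bits when programming the per-channel interrupt enable
register, something along these lines (sketch only, written with the
dwmac4 macro names; the actual hunk may well differ, and other cores
will need their own bits):

	u32 value;

	/* Sketch, not the actual patch: also enable "receive buffer
	 * unavailable" (RBUE) and "receive process stopped" (RSE), plus
	 * the abnormal interrupt summary (AIE) they are reported under.
	 */
	value = readl(ioaddr + DMA_CHAN_INTR_ENA(chan));
	value |= DMA_CHAN_INTR_ENA_RBUE | DMA_CHAN_INTR_ENA_RSE |
		 DMA_CHAN_INTR_ENA_AIE;
	writel(value, ioaddr + DMA_CHAN_INTR_ENA(chan));
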
It works, I can indeed see the stats get properly updated on imx8mp :)

There's one downside to it though: as soon as we hit a situation where
we run out of RX buffers, this patch has a tendency to make things
worse, as we'll trigger an interrupt for each packet we receive but
can't process, making it take even longer for the queues to be
refilled.
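
For reference, those counters are bumped directly from the DMA
interrupt handler, so every RBU event now costs a full interrupt; from
memory the dwmac4 status handling looks roughly like this (simplified
sketch, not the exact upstream code):

	if (unlikely(intr_status & DMA_CHAN_STATUS_AIS)) {
		/* Each packet arriving with no free RX descriptor now
		 * raises an abnormal interrupt just to bump these
		 * counters.
		 */
		if (unlikely(intr_status & DMA_CHAN_STATUS_RBU))
			x->rx_buf_unav_irq++;
		if (unlikely(intr_status & DMA_CHAN_STATUS_RPS))
			x->rx_process_stopped_irq++;
	}

So the CPU time we'd otherwise spend in NAPI replenishing the ring is
instead spent servicing these abnormal interrupts.
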
It shows up with iperf3 and small packets:

---- Before patch, 17% packet loss on UDP 56-byte packets -----------------
# iperf3 -u -b 0 -l 56 -c 192.168.2.1 -R
Connecting to host 192.168.2.1, port 5201
Reverse mode, remote host 192.168.2.1 is sending
[ 5] local 192.168.2.18 port 47851 connected to 192.168.2.1 port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  10.7 MBytes  90.0 Mbits/sec  0.003 ms  48550/249650 (19%)
[  5]   1.00-2.00   sec  11.3 MBytes  95.0 Mbits/sec  0.003 ms  41881/253832 (16%)
[  5]   2.00-3.00   sec  11.3 MBytes  94.9 Mbits/sec  0.002 ms  42060/253913 (17%)
[  5]   3.00-4.00   sec  11.3 MBytes  95.1 Mbits/sec  0.003 ms  41499/253785 (16%)
[  5]   4.00-5.00   sec  11.3 MBytes  94.6 Mbits/sec  0.003 ms  42663/253787 (17%)
[  5]   5.00-6.00   sec  11.3 MBytes  94.9 Mbits/sec  0.006 ms  41976/253719 (17%)
[  5]   6.00-7.00   sec  11.3 MBytes  94.5 Mbits/sec  0.003 ms  43133/253999 (17%)
[  5]   7.00-8.00   sec  11.3 MBytes  95.0 Mbits/sec  0.004 ms  41442/253579 (16%)
[  5]   8.00-9.00   sec  11.4 MBytes  95.2 Mbits/sec  0.004 ms  41518/254131 (16%)
[  5]   9.00-10.00  sec  11.2 MBytes  94.3 Mbits/sec  0.006 ms  43580/254143 (17%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   135 MBytes   114 Mbits/sec  0.000 ms  0/0 (0%)  sender
[  5]   0.00-10.00  sec   112 MBytes  94.3 Mbits/sec  0.006 ms  428302/2534538 (17%)  receiver
iperf Done.
# ethtool -S eth1 | grep rx_buf_unav_irq
rx_buf_unav_irq: 0

---- After patch, 22% packet loss on UDP 56-byte packets ----------------------
# iperf3 -u -b 0 -l 56 -c 192.168.2.1 -R
Connecting to host 192.168.2.1, port 5201
Reverse mode, remote host 192.168.2.1 is sending
[ 5] local 192.168.2.18 port 42121 connected to 192.168.2.1 port 5201
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  10.3 MBytes  85.8 Mbits/sec  0.004 ms  55146/247172 (22%)
[  5]   1.00-2.00   sec  10.6 MBytes  89.1 Mbits/sec  0.003 ms  54699/253355 (22%)
[  5]   2.00-3.00   sec  10.6 MBytes  89.0 Mbits/sec  0.003 ms  55231/253887 (22%)
[  5]   3.00-4.00   sec  10.6 MBytes  88.9 Mbits/sec  0.003 ms  55138/253602 (22%)
[  5]   4.00-5.00   sec  10.6 MBytes  89.0 Mbits/sec  0.003 ms  54938/253722 (22%)
[  5]   5.00-6.00   sec  10.6 MBytes  88.9 Mbits/sec  0.003 ms  55273/253580 (22%)
[  5]   6.00-7.00   sec  10.6 MBytes  89.0 Mbits/sec  0.003 ms  55202/253986 (22%)
[  5]   7.00-8.00   sec  10.6 MBytes  89.1 Mbits/sec  0.003 ms  55047/253958 (22%)
[  5]   8.00-9.00   sec  10.6 MBytes  88.9 Mbits/sec  0.003 ms  55612/254140 (22%)
[  5]   9.00-10.00  sec  10.6 MBytes  89.0 Mbits/sec  0.003 ms  55683/254403 (22%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   135 MBytes   113 Mbits/sec  0.000 ms  0/0 (0%)  sender
[  5]   0.00-10.00  sec   106 MBytes  88.7 Mbits/sec  0.003 ms  551969/2531805 (22%)  receiver
iperf Done.
# ethtool -S eth1 | grep rx_buf_unav_irq
rx_buf_unav_irq: 30624
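
For scale, that's 30624 RBU interrupts over the ~10 second run, i.e.
roughly 3000 extra abnormal interrupts per second on top of the normal
RX interrupts.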

So clearly there are pros and cons to this, but I don't want to fall
into the "let's not break microbenchmarks" pitfall.

I personally find the stat useful, so:

Tested-by: Maxime Chevallier <maxime.chevallier@...tlin.com>
Reviewed-by: Maxime Chevallier <maxime.chevallier@...tlin.com>

Maxime