Message-ID: <20201216090316.1c273267@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Wed, 16 Dec 2020 09:03:16 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Sven Van Asbroeck <thesven73@...il.com>
Cc: Bryan Whitehead <bryan.whitehead@...rochip.com>,
Microchip Linux Driver Support <UNGLinuxDriver@...rochip.com>,
David S Miller <davem@...emloft.net>,
Andrew Lunn <andrew@...n.ch>,
Eric Dumazet <edumazet@...gle.com>,
Heiner Kallweit <hkallweit1@...il.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net v4] lan743x: fix rx_napi_poll/interrupt ping-pong
On Tue, 15 Dec 2020 11:19:54 -0500 Sven Van Asbroeck wrote:
> From: Sven Van Asbroeck <thesven73@...il.com>
>
> Even if there is more rx data waiting on the chip, the rx napi poll fn
> will never run more than once: it always reads a few buffers, then
> bails out and re-arms interrupts, which results in a ping-pong between
> napi and the interrupt handler.
>
> This defeats the purpose of napi and hurts performance.
>
> Fix by making the rx napi poll behave identically to other ethernet
> drivers (a minimal sketch of the pattern follows this list):
> 1. initialize rx napi polling with an arbitrary budget (64).
> 2. in the polling fn, return the full budget if the rx queue is not
>    depleted; this tells the napi core to "keep polling".
> 3. update the rx tail ("ring the doorbell") once for every 8 processed
>    rx ring buffers.
>
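For reference, a minimal sketch of the poll pattern the three steps
above describe. All my_* names (my_ring, my_rx_one, my_ring_doorbell,
my_irq_enable) are hypothetical placeholders, not the actual lan743x
code; netif_napi_add(), napi_complete_done() and NAPI_POLL_WEIGHT are
the real kernel APIs of this era.

#include <linux/netdevice.h>

/* Hypothetical per-ring state; stands in for the driver's real rx ring. */
struct my_ring {
	struct napi_struct napi;
	/* ... descriptors, buffers, tail pointer, ... */
};

static bool my_rx_one(struct my_ring *ring);        /* consume one rx buffer */
static void my_ring_doorbell(struct my_ring *ring); /* write rx tail to hw   */
static void my_irq_enable(struct my_ring *ring);    /* re-arm rx interrupt   */

static int my_rx_napi_poll(struct napi_struct *napi, int budget)
{
	struct my_ring *ring = container_of(napi, struct my_ring, napi);
	int done = 0;

	while (done < budget) {
		if (!my_rx_one(ring))   /* rx queue depleted */
			break;
		done++;
		/* step 3: ring the doorbell once per 8 processed buffers */
		if (!(done % 8))
			my_ring_doorbell(ring);
	}

	/* step 2: queue not depleted, so return the full budget; the
	 * napi core keeps polling instead of re-arming interrupts.
	 */
	if (done == budget)
		return budget;

	/* queue depleted: complete napi and re-arm the rx interrupt */
	if (napi_complete_done(napi, done))
		my_irq_enable(ring);

	return done;
}

/* step 1: the poll fn is registered with the stock weight of 64, e.g.
 *   netif_napi_add(netdev, &ring->napi, my_rx_napi_poll, NAPI_POLL_WEIGHT);
 */
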
> Thanks to Jakub Kicinski, Eric Dumazet and Andrew Lunn for their expert
> opinions and suggestions.
>
> Tested with 20 seconds of full bandwidth receive (iperf3):
>          rx irqs    softirqs(NET_RX)
> -------------------------------------
> before   23827      33620
> after      129       4081
>
> Tested-by: Sven Van Asbroeck <thesven73@...il.com> # lan7430
> Fixes: 23f0703c125be ("lan743x: Add main source files for new lan743x driver")
> Signed-off-by: Sven Van Asbroeck <thesven73@...il.com>
Applied, thanks Sven.
I'll leave it out of our stable submission, and expect Sasha's
autoselection bot to pick it up. This should give us more time
for testing before the patch makes its way to stable trees.
Let's see how this idea works out for us in practice.