Message-ID: <20250206195035.12d64d8a@pumpkin>
Date: Thu, 6 Feb 2025 19:50:35 +0000
From: David Laight <david.laight.linux@...il.com>
To: Dave Taht <dave.taht@...il.com>
Cc: Samiullah Khawaja <skhawaja@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, "David S . Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
almasrymina@...gle.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v3 0/4] Add support to do threaded napi busy
poll
On Wed, 5 Feb 2025 21:36:31 -0800
Dave Taht <dave.taht@...il.com> wrote:
> I have often wondered the effects of reducing napi poll weight from 64
> to 16 or less.
Doesn't that just move the loop from inside 'NAPI' out to the softirq scheduler?
That could easily slow things down because of L1 I-cache pressure.
IIRC what happens next is that the softirq scheduler decides it has run too
many functions on the current process stack and defers further
calls to a thread context.
Since that thread runs at a normal user priority, the hardware receive rings
then overflow (discarding packets): the receive code has suddenly
gone from being very high priority (higher than any RT process) to really
quite low.
On a system doing real work, all the ethernet receive code needs to run
at a reasonably high priority (e.g. a low FIFO one) in order to avoid
packet loss for anything needing high rx packet rates.
David