Message-ID: <CAA93jw5a71Egu9xwjwvx5RLmPa_j7EvU8ztRgTsLNxH2xFV6yw@mail.gmail.com>
Date: Wed, 5 Feb 2025 21:57:42 -0800
From: Dave Taht <dave.taht@...il.com>
To: Samiullah Khawaja <skhawaja@...gle.com>
Cc: Jakub Kicinski <kuba@...nel.org>, "David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, almasrymina@...gle.com,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next v3 0/4] Add support to do threaded napi busy poll
On Wed, Feb 5, 2025 at 9:49 PM Samiullah Khawaja <skhawaja@...gle.com> wrote:
>
> On Wed, Feb 5, 2025 at 9:36 PM Dave Taht <dave.taht@...il.com> wrote:
> >
> > I have often wondered the effects of reducing napi poll weight from 64
> > to 16 or less.
> Yes, that is interesting. I think a higher weight would allow it to
> fetch more descriptors, doing more batching, but then packets are
> pushed up the stack later. A lower value would push packets up the
> stack more quickly, but if the core is shared with the application
> processing thread, the descriptors will spend more time in the NIC
> queue.
My take has been that a very low weight would keep far more data in L1
before it is processed elsewhere. Modern interrupt response times on
arm gear seem a bit lower than on x86_64, but are still pretty horrible.
It's not really related to your patch, but I would love to see cache
hit/miss counts vis-à-vis this benchmark (with/without a lower weight).
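One plausible way to gather those numbers, assuming perf is available on the test box (the benchmark binary name below is a placeholder, and exact event names vary by CPU):

```shell
# Compare cache behavior across runs with different NAPI weights;
# ./busy_poll_benchmark is a hypothetical stand-in for the workload.
perf stat -e cache-references,cache-misses,L1-dcache-loads,L1-dcache-load-misses \
    ./busy_poll_benchmark
```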
> >
> > Also your test shows an increase in max latency...
> >
> > latency_max=0.200182942
> I noticed this anomaly, and my guess is that it is a packet drop and
> this is basically a retransmit timeout. I am going through tcpdumps
> to confirm.
Sometimes enabling ECN helps as a debugging tool.
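For anyone reproducing this: ECN negotiation is controlled by a sysctl, and with it enabled, congestion that would otherwise be a silent drop can show up as CE marks, visible in tcpdump via the TCP ECE/CWR flags (a debugging aid under the assumption that the bottleneck queue actually marks):

```shell
# Request ECN on outgoing TCP connections and accept it on incoming ones
# (0 = off, 1 = request + accept, 2 = accept only, the usual default).
sysctl -w net.ipv4.tcp_ecn=1
```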
--
Dave Täht CSO, LibreQos