Message-ID: <20240903124008.4793c087@kernel.org>
Date: Tue, 3 Sep 2024 12:40:08 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Samiullah Khawaja <skhawaja@...gle.com>
Cc: Joe Damato <jdamato@...tly.com>, netdev@...r.kernel.org,
edumazet@...gle.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, sdf@...ichev.me, bjorn@...osinc.com,
hch@...radead.org, willy@...radead.org, willemdebruijn.kernel@...il.com,
Martin Karsten <mkarsten@...terloo.ca>, Donald Hunter
<donald.hunter@...il.com>, "David S. Miller" <davem@...emloft.net>, Paolo
Abeni <pabeni@...hat.com>, Jesper Dangaard Brouer <hawk@...nel.org>, Xuan
Zhuo <xuanzhuo@...ux.alibaba.com>, Daniel Jurgens <danielj@...dia.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 5/5] netdev-genl: Support setting per-NAPI
config values
On Tue, 3 Sep 2024 12:04:52 -0700 Samiullah Khawaja wrote:
> Do we need a queue-to-NAPI association to set/persist NAPI
> configurations?
I'm afraid zero-copy schemes will make multiple queues per NAPI more
and more common, so pretending the NAPI params (related to polling)
are per queue will soon become highly problematic.
> Can a new index param be added to netif_napi_add() to persist the
> configurations in napi_storage?
That'd be my (weak) preference.
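Roughly something like the below, i.e. the driver passes a stable index
and the core does the lookup. Just a sketch -- the helper name and the
index/config fields on napi_struct / net_device are made up here, not
something that exists upstream:

/* Sketch only: netif_napi_add_config(), napi->index, napi->config and
 * dev->napi_config are hypothetical names. The point is the driver
 * hands the core a stable index so persisted per-NAPI settings survive
 * the NAPI instance being destroyed and re-created.
 */
static inline void
netif_napi_add_config(struct net_device *dev, struct napi_struct *napi,
		      int (*poll)(struct napi_struct *, int),
		      unsigned int index)
{
	napi->index = index;
	napi->config = &dev->napi_config[index];

	netif_napi_add(dev, napi, poll);
}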
> I guess the problem would be the size of napi_storage.
I don't think so; we're talking about 16B per NAPI, while
struct netdev_queue is 320B and struct netdev_rx_queue is 192B.
NAPI storage is a rounding error next to those :S
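FWIW by 16B I mean something on the order of the below; field names are
illustrative only, not a proposed layout:

/* Roughly what "16B per NAPI" refers to: the persisted polling params
 * plus padding. Names/types here are placeholders.
 */
struct napi_storage {
	u64	gro_flush_timeout;	/* 8B */
	u32	defer_hard_irqs;	/* 4B */
	u32	pad;			/* 4B -> 16B total */
};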
> Also wondering if for some use case persistence would be problematic
> when the napis are recreated, since the new napi instances might not
> represent the same context? For example, if I resize the dev from 16
> rx/tx to 8 rx/tx queues, the napi index that was used by a TX queue
> now polls an RX queue.
We can clear the config when NAPI is activated (ethtool -L /
set-channels). That seems like a good idea.
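I.e. roughly this, somewhere in the set-channels path (pure sketch,
helper and net_device fields invented for illustration):

/* Sketch: wipe any persisted per-NAPI config when the channel count is
 * changed via ethtool -L / set-channels, so e.g. old Tx-NAPI settings
 * don't silently apply to what is now an Rx NAPI. num_napi_configs and
 * napi_config are hypothetical net_device fields here.
 */
static void netdev_reset_napi_config(struct net_device *dev)
{
	unsigned int i;

	for (i = 0; i < dev->num_napi_configs; i++)
		memset(&dev->napi_config[i], 0, sizeof(dev->napi_config[i]));
}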
The distinction between Rx and Tx NAPIs is a bit more tricky, tho.
When^w If we can dynamically create Rx queues one day, a NAPI may
start out as a Tx NAPI and become a combined one when an Rx queue is
added to it.
Maybe it's enough to document how rings are distributed to NAPIs?
The first set of NAPIs would get allocated to the combined channels,
then the remaining rx- and tx-only NAPIs would be interleaved,
starting with rx?
Example, asymmetric config: combined + some extra tx:

  combined            tx
  [0..#combined-1]    [#combined..#combined+#tx-1]
Split rx / tx - interleave:
[0 rx0] [1 tx0] [2 rx1] [3 tx1] [4 rx2] [5 tx2] ...
This would limit the churn when changing channel counts.
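In other words, roughly the below (sketch only; it handles just the two
shapes above - combined plus one kind of extra ring, or a symmetric
rx/tx split - function name is made up):

/* Sketch of the ring -> NAPI index rule described above:
 * combined channels take [0, combined); with only one kind of extra
 * ring the extras pack right after; with both rx- and tx-only rings
 * present they interleave, starting with rx.
 */
static unsigned int ring_to_napi_index(unsigned int combined,
				       unsigned int extra_rx,
				       unsigned int extra_tx,
				       unsigned int ring, bool is_rx)
{
	if (ring < combined)
		return ring;			/* combined channels first */

	ring -= combined;			/* rx- or tx-only extra ring */
	if (!extra_rx || !extra_tx)		/* e.g. combined + extra tx */
		return combined + ring;

	return combined + 2 * ring + (is_rx ? 0 : 1);
}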