Message-ID: <Zv8o4eliTO60odQe@mini-arch>
Date: Thu, 3 Oct 2024 16:29:37 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Joe Damato <jdamato@...tly.com>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, skhawaja@...gle.com,
sdf@...ichev.me, bjorn@...osinc.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, willemdebruijn.kernel@...il.com,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Breno Leitao <leitao@...ian.org>,
Daniel Jurgens <danielj@...dia.com>,
David Ahern <dsahern@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Donald Hunter <donald.hunter@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Jiri Pirko <jiri@...nulli.us>,
Johannes Berg <johannes.berg@...el.com>,
Jonathan Corbet <corbet@....net>,
Kory Maincent <kory.maincent@...tlin.com>,
Leon Romanovsky <leon@...nel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:MELLANOX MLX4 core VPI driver" <linux-rdma@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Michael Chan <michael.chan@...adcom.com>,
Mina Almasry <almasrymina@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Saeed Mahameed <saeedm@...dia.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Tariq Toukan <tariqt@...dia.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Subject: Re: [RFC net-next v4 0/9] Add support for per-NAPI config via netlink
On 10/01, Joe Damato wrote:
> Greetings:
>
> Welcome to RFC v4.
>
> Significant changes have been made since RFC v3 [1]; please see the
> changelog below for details.
>
> A couple of important call-outs for reviewers of this revision:
>
> 1. idpf embeds a napi_struct in an internal data structure and
> includes an assertion on the size of napi_struct. The maintainers
> have stated that they think anyone touching napi_struct should update
> the assertion [2], so I've done this in patch 3.
>
> Even though the assertion has been updated, I've given no thought to
> the cacheline placement of napi_struct within idpf's internals.
>
> I would appreciate other opinions on this; I think idpf should be
> fixed. It seems unreasonable to me that anyone changing the size of
> a struct in the core should need to think about cachelines in idpf.
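For anyone who hasn't looked at idpf, the pattern in question is roughly the
following (a sketch only; the struct name, fields and size constant are
illustrative, not the actual idpf code):

  /* driver embeds napi_struct directly in its per-queue-vector struct */
  struct foo_q_vector {
          struct napi_struct napi;
          /* ... driver fields laid out around the embedded napi ... */
  };

  /* build fails if napi_struct grows past the asserted bound, forcing
   * whoever changed napi_struct to revisit this driver's layout
   */
  static_assert(sizeof(struct napi_struct) <= 400,
                "napi_struct grew; check foo_q_vector cacheline layout");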
[..]
> 2. This revision seems to work (see below for a full walk-through). Is
> this the behavior we want? Am I missing some use case or some
> behavioral requirement other folks need?
The walk-through looks good!
> 3. Re a previous point made by Stanislav regarding "taking over a NAPI
> ID" when the channel count changes: mlx5 seems to call napi_disable
> followed by netif_napi_del for the old queues and then calls
> napi_enable for the new ones. In this RFC, the NAPI ID generation
> is deferred to napi_enable. This means we won't end up with two of
> the same NAPI IDs added to the hash at the same time (I am pretty
> sure).
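For the archive, the ordering being relied on is roughly this (a sketch of
the common pattern, not mlx5's actual functions):

  /* hypothetical channel-count change in a driver following the pattern */
  for (i = 0; i < old_count; i++) {
          napi_disable(&old[i].napi);    /* stop polling on the old queue */
          netif_napi_del(&old[i].napi);  /* old NAPI drops out of the hash */
  }

  for (i = 0; i < new_count; i++) {
          netif_napi_add(dev, &new[i].napi, foo_poll);
          napi_enable(&new[i].napi);     /* with this RFC, the NAPI ID is
                                          * generated and hashed here */
  }

Every old NAPI is out of the hash before any new one is enabled, so the same
ID is never present twice at once.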
[..]
> Can we assume all drivers will napi_disable the old queues before
> they napi_enable the new ones? If yes, we might not need to worry about
> a NAPI ID takeover function.
With the explicit driver opt-in via netif_napi_add_config, this
shouldn't matter? When somebody gets to converting the drivers that
don't follow this common pattern, they'll have to solve the takeover
part :-)
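With that opt-in, a converted driver ends up doing something along these
lines (a sketch, assuming the netif_napi_add_config() signature from this
series; the driver names are made up):

  /* tie each NAPI to a stable per-queue config index so its settings
   * (and, with this series, its NAPI ID) survive channel reconfiguration
   */
  for (i = 0; i < priv->num_queues; i++)
          netif_napi_add_config(dev, &priv->q[i].napi, foo_poll, i);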