[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Ztl6lATqzndc2-hK@LQ3V64L9R2>
Date: Thu, 5 Sep 2024 11:32:04 +0200
From: Joe Damato <jdamato@...tly.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Stanislav Fomichev <sdf@...ichev.me>, netdev@...r.kernel.org,
edumazet@...gle.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, bjorn@...osinc.com, hch@...radead.org,
willy@...radead.org, willemdebruijn.kernel@...il.com,
skhawaja@...gle.com, Martin Karsten <mkarsten@...terloo.ca>,
Donald Hunter <donald.hunter@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Daniel Jurgens <danielj@...dia.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 5/5] netdev-genl: Support setting per-NAPI
config values
On Wed, Sep 04, 2024 at 04:54:17PM -0700, Jakub Kicinski wrote:
> On Wed, 4 Sep 2024 16:40:41 -0700 Stanislav Fomichev wrote:
> > > I think what you are proposing seems fine; I'm just working out the
> > > implementation details and making sure I understand before sending
> > > another revision.
> >
> > What if instead of an extra storage index in UAPI, we make napi_id persistent?
> > Then we can keep using napi_id as a user-facing number for the configuration.
> >
> > Having a stable napi_id would also be super useful for the epoll setup so you
> > don't have to match old/invalid ids to the new ones on device reset.
>
> that'd be nice, initially I thought that we have some drivers that have
> multiple instances of NAPI enabled for a single "index", but I don't
> see such drivers now.
>
> > In the code, we can keep the same idea with napi_storage in netdev and
> > ask drivers to provide storage id, but keep that id internal.
> >
> > The only complication with that is napi_hash_add/napi_hash_del that
> > happen in netif_napi_add_weight. So for the devices that allocate
> > new napi before removing the old ones (most devices?), we'd have to add
> > some new netif_napi_takeover(old_napi, new_napi) to remove the
> > old napi_id from the hash and reuse it in the new one.
> >
> > So for mlx5, the flow would look like the following:
> >
> > - mlx5e_safe_switch_params
> > - mlx5e_open_channels
> > - netif_napi_add(new_napi)
> > - adds napi with 'ephemeral' napi id
> > - mlx5e_switch_priv_channels
> > - mlx5e_deactivate_priv_channels
> > - napi_disable(old_napi)
> > - netif_napi_del(old_napi) - this frees the old napi_id
> > - mlx5e_activate_priv_channels
> > - mlx5e_activate_channels
> > - mlx5e_activate_channel
> > - netif_napi_takeover(old_napi is gone, so probably take id from napi_storage?)
> > - if napi is not hashed - safe to reuse?
> > - napi_enable
> >
> > This is a bit ugly because we still have random napi ids during reset, but
> > is not super complicated implementation-wise. We can eventually improve
> > the above by splitting netif_napi_add_weight into two steps: allocate and
> > activate (to do the napi_id allocation & hashing). Thoughts?
>
> The "takeover" would be problematic for drivers which free old NAPI
> before allocating new one (bnxt?). But splitting the two steps sounds
> pretty clean. We can add a helper to mark NAPI as "driver will
> explicitly list/hash later", and have the driver call a new helper
> which takes storage ID and lists the NAPI in the hash.
Hm... I thought I had an idea of how to write this up, but I think
maybe I've been thinking about it wrong.
Whatever I land on, I'll send first as an RFC to make sure I'm
following all the feedback that has come in. I definitely want to
get this right.
Sorry for the slow responses; I am technically on PTO for a bit
before LPC :)
Powered by blists - more mailing lists