Message-ID: <Zt_kJT9jCy1rLLCr@LQ3V64L9R2.homenet.telecomitalia.it>
Date: Tue, 10 Sep 2024 08:16:05 +0200
From: Joe Damato <jdamato@...tly.com>
To: Stanislav Fomichev <stfomichev@...il.com>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, kuba@...nel.org,
skhawaja@...gle.com, sdf@...ichev.me, bjorn@...osinc.com,
amritha.nambiar@...el.com, sridhar.samudrala@...el.com,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>, Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC net-next v2 1/9] net: napi: Add napi_storage
On Mon, Sep 09, 2024 at 03:37:41PM -0700, Stanislav Fomichev wrote:
> On 09/08, Joe Damato wrote:
> > On Sun, Sep 08, 2024 at 04:06:35PM +0000, Joe Damato wrote:
> > > Add a persistent NAPI storage area for NAPI configuration to the core.
> > > Drivers opt-in to setting the storage for a NAPI by passing an index
> > > when calling netif_napi_add_storage.
> > >
> > > napi_storage is allocated in alloc_netdev_mqs, freed in free_netdev
> > > (after the NAPIs are deleted), and set to 0 when napi_enable is called.
> > >
> > > Signed-off-by: Joe Damato <jdamato@...tly.com>
> > > ---
> > > .../networking/net_cachelines/net_device.rst | 1 +
> > > include/linux/netdevice.h | 34 +++++++++++++++++++
> > > net/core/dev.c | 18 +++++++++-
> > > 3 files changed, 52 insertions(+), 1 deletion(-)
> > >
> >
> > [...]
> >
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -6719,6 +6719,9 @@ void napi_enable(struct napi_struct *n)
> > > if (n->dev->threaded && n->thread)
> > > new |= NAPIF_STATE_THREADED;
> > > } while (!try_cmpxchg(&n->state, &val, new));
> > > +
> > > + if (n->napi_storage)
> > > + memset(n->napi_storage, 0, sizeof(*n->napi_storage));
> >
> > This part is very obviously wrong ;)
> >
> > I think when I was reading the other thread about resetting on
> > channel change [1] I got a bit confused.
> >
> > Maybe what was intended was to do nothing on napi_enable / remove the
> > above code (which, I suppose, would give the desired persistence)?
> >
> > But modify net/ethtool/channels.c to reset NAPIs to the global
> > (sysfs) settings? Not sure how to balance persistence with queue
> > count changes in a way that makes sense for users of the API.
> >
> > And, I didn't quite follow the bits about:
> > 1. The proposed ring to NAPI mapping
>
> [..]
>
> > 2. The two step "takeover" which seemed to imply that we might
> > pull napi_id into napi_storage? Or maybe I just read that part
> > wrong?
>
> Yes, the suggestion here is to drop patch #2 from your series and
> keep napi_id as a user-facing 'id' for the persistent storage. But,
> obviously, this requires persistent napi_id(s) that survive device
> resets.
>
> The function that allocates a new napi_id is napi_hash_add, called
> from netif_napi_add_weight. So we can do either of the following:
> 1. Keep everything as is, but add a napi_rehash somewhere
>    around napi_enable to 'take over' the previously allocated napi_id.
> 2. (preferred) Separate napi_hash_add out of netif_napi_add_weight,
>    and add some new napi_hash_with_id(previous_napi_id) to expose it to
>    userspace. Then convert mlx5 to this new interface.
Jakub, is this what you were thinking too?
If that's the case, then the netlink code would need to be tweaked to
operate on NAPI IDs again (since they'd be persistent) instead of
ifindex + napi_storage index?
LMK.