Message-ID: <Zt_jn5RQAndpKjoE@LQ3V64L9R2.homenet.telecomitalia.it>
Date: Tue, 10 Sep 2024 08:13:51 +0200
From: Joe Damato <jdamato@...tly.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, skhawaja@...gle.com,
sdf@...ichev.me, bjorn@...osinc.com, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>, Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC net-next v2 1/9] net: napi: Add napi_storage
On Mon, Sep 09, 2024 at 04:40:39PM -0700, Jakub Kicinski wrote:
> On Sun, 8 Sep 2024 16:06:35 +0000 Joe Damato wrote:
> > Add a persistent NAPI storage area for NAPI configuration to the core.
> > Drivers opt in to setting the storage for a NAPI by passing an index
> > when calling netif_napi_add_storage.
> >
> > napi_storage is allocated in alloc_netdev_mqs, freed in free_netdev
> > (after the NAPIs are deleted), and set to 0 when napi_enable is called.
>
> > enum {
> > @@ -2009,6 +2019,9 @@ enum netdev_reg_state {
> > * @dpll_pin: Pointer to the SyncE source pin of a DPLL subsystem,
> > * where the clock is recovered.
> > *
> > + * @napi_storage: An array of napi_storage structures containing per-NAPI
> > + * settings.
>
> FWIW you can use inline kdoc; with the size of the struct it's
> easier to find. Also, this doesn't need to be accessed from the
> fast path, so you can move it down.
OK. I figured that since it was being deref'd in napi_complete_done()
(where we previously read napi_defer_hard_irqs and
gro_flush_timeout), it needed to be in the fast path.
I'll move it down for the next RFC.
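For reference, the deref I was thinking of is in
napi_complete_done(), where the reads would become something like
this (sketch; assumes the settings land in napi_storage under the
same names):

	/* napi_complete_done(): read the per-NAPI settings through
	 * the storage pointer instead of the netdev-wide fields
	 */
	if (work_done) {
		if (n->gro_bitmask)
			timeout = READ_ONCE(n->napi_storage->gro_flush_timeout);
		n->defer_hard_irqs_count =
			READ_ONCE(n->napi_storage->napi_defer_hard_irqs);
	}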
> > +/**
> > + * netif_napi_add_storage - initialize a NAPI context and set storage area
> > + * @dev: network device
> > + * @napi: NAPI context
> > + * @poll: polling function
> > + * @weight: the poll weight of this NAPI
> > + * @index: the NAPI index
> > + */
> > +static inline void
> > +netif_napi_add_storage(struct net_device *dev, struct napi_struct *napi,
> > + int (*poll)(struct napi_struct *, int), int weight,
> > + int index)
> > +{
> > + napi->index = index;
> > + napi->napi_storage = &dev->napi_storage[index];
> > + netif_napi_add_weight(dev, napi, poll, weight);
>
> You can drop the weight param, just pass NAPI_POLL_WEIGHT.
OK will do.
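So the helper would become something like this (untested):

static inline void
netif_napi_add_storage(struct net_device *dev, struct napi_struct *napi,
		       int (*poll)(struct napi_struct *, int), int index)
{
	napi->index = index;
	napi->napi_storage = &dev->napi_storage[index];
	netif_napi_add_weight(dev, napi, poll, NAPI_POLL_WEIGHT);
}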
> Then -- change netif_napi_add_weight() to prevent it from
> calling napi_hash_add() if it has index >= 0
OK.
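Presumably something like this near the end of
netif_napi_add_weight() (sketch; assumes napi->index is initialized
to -1 for drivers that don't opt in):

	/* NAPIs with a storage index are hashed at napi_enable()
	 * time instead, so they can keep a persistent NAPI ID
	 */
	if (napi->index < 0)
		napi_hash_add(napi);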
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 22c3f14d9287..ca90e8cab121 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -6719,6 +6719,9 @@ void napi_enable(struct napi_struct *n)
> > if (n->dev->threaded && n->thread)
> > new |= NAPIF_STATE_THREADED;
> > } while (!try_cmpxchg(&n->state, &val, new));
> > +
> > + if (n->napi_storage)
> > + memset(n->napi_storage, 0, sizeof(*n->napi_storage));
OK, your comments below will probably make more sense to me after I
try implementing it, but I'll definitely have some questions.
> And here inherit the settings and the NAPI ID from storage, then call
> napi_hash_add(). napi_hash_add() will need a minor diff to use the
> existing ID if already assigned.
I don't think I realized we settled on the NAPI ID being persistent.
I'm not opposed to that, I just think I missed that part in the
previous conversation.
I'll give it a shot and see what the next RFC looks like.
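If I'm following, the minor diff to napi_hash_add() would be roughly
this (untested sketch):

static void napi_hash_add(struct napi_struct *napi)
{
	if (test_bit(NAPI_STATE_NO_BUSY_POLL, &napi->state))
		return;

	spin_lock(&napi_hash_lock);

	/* Reuse an ID previously restored from napi_storage in
	 * napi_enable(); only generate a new one if none is set.
	 */
	if (!napi->napi_id) {
		/* 0..NR_CPUS range is reserved for sender_cpu use */
		do {
			if (unlikely(++napi_gen_id < MIN_NAPI_ID))
				napi_gen_id = MIN_NAPI_ID;
		} while (napi_by_id(napi_gen_id));

		napi->napi_id = napi_gen_id;
	}

	hlist_add_head_rcu(&napi->napi_hash_node,
			   &napi_hash[napi->napi_id % HASH_SIZE(napi_hash)]);

	spin_unlock(&napi_hash_lock);
}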
> And the inverse of that has to happen in napi_disable() (unhash, save
> settings to storage), and __netif_napi_del() (don't unhash if it has
> index).
>
> I think that should work?
Only one way to find out ;)
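I'd guess the disable side looks something like this (sketch;
assumes the runtime values live in napi_struct per your suggestion,
and the napi_storage field names are my own):

	/* in napi_disable(): stash the per-NAPI settings and the
	 * NAPI ID so they survive a disable/enable cycle
	 */
	if (n->napi_storage) {
		n->napi_storage->gro_flush_timeout =
			READ_ONCE(n->gro_flush_timeout);
		n->napi_storage->napi_defer_hard_irqs =
			READ_ONCE(n->napi_defer_hard_irqs);
		n->napi_storage->napi_id = n->napi_id;
		napi_hash_del(n);
	}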
Separately: your comment about documenting the mapping of rings to
NAPIs -- I'm not following that bit.
Is that something you meant should be documented for driver writers
to follow, to reduce churn?