Message-ID: <ZuS4S-TPa8b2TWXH@LQ3V64L9R2.homenet.telecomitalia.it>
Date: Sat, 14 Sep 2024 00:10:19 +0200
From: Joe Damato <jdamato@...tly.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: netdev@...r.kernel.org, mkarsten@...terloo.ca, kuba@...nel.org,
skhawaja@...gle.com, sdf@...ichev.me, bjorn@...osinc.com,
amritha.nambiar@...el.com, sridhar.samudrala@...el.com,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>, Jiri Pirko <jiri@...nulli.us>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Lorenzo Bianconi <lorenzo@...nel.org>,
David Ahern <dsahern@...nel.org>,
Johannes Berg <johannes.berg@...el.com>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC net-next v3 5/9] net: napi: Add napi_config
On Fri, Sep 13, 2024 at 05:44:07PM -0400, Willem de Bruijn wrote:
> Joe Damato wrote:
> > Several comments on different things below for this patch that I just noticed.
> >
> > On Thu, Sep 12, 2024 at 10:07:13AM +0000, Joe Damato wrote:
> > > Add a persistent NAPI config area for NAPI configuration to the core.
> > > Drivers opt-in to setting the storage for a NAPI by passing an index
> > > when calling netif_napi_add_storage.
> > >
> > > napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
> > > (after the NAPIs are deleted), and set to 0 when napi_enable is called.
> >
> > Forgot to re-read all the commit messages. I will do that for rfcv4
> > and make sure they are all correct; this message is not correct.
> >
> > > Drivers which call netif_napi_add_storage will have persistent
> > > NAPI IDs.
> > >
> > > Signed-off-by: Joe Damato <jdamato@...tly.com>
>
> > > @@ -11062,6 +11110,9 @@ struct net_device *alloc_netdev_mqs(int sizeof_priv, const char *name,
> > > return NULL;
> > > }
> > >
> > > + WARN_ON_ONCE(txqs != rxqs);
> >
> > This warning triggers for me on boot every time with mlx5 NICs.
> >
> > The code in mlx5 seems to get the rxq and txq maximums in:
> > drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > mlx5e_create_netdev
> >
> > which does:
> >
> > txqs = mlx5e_get_max_num_txqs(mdev, profile);
> > rxqs = mlx5e_get_max_num_rxqs(mdev, profile);
> >
> > netdev = alloc_etherdev_mqs(sizeof(struct mlx5e_priv), txqs, rxqs);
> >
> > In my case for my device, txqs: 760, rxqs: 63.
> >
> > I would guess that this warning will trigger every time for mlx5
> > NICs and will be quite annoying.
> >
> > We may just want to change the allocation logic to allocate
> > txqs + rxqs entries, remove the WARN_ON_ONCE, and accept some
> > wasted space?
>
> I was about to say that txqs == rxqs is not necessary.
Correct.
> The number of napi config structs you want depends on whether the
> driver configures separate IRQs for Tx and Rx or not.
Correct. This is why I included the mlx4 patch.
> Allocating the max of the two is perhaps sufficient for now.
I don't think I agree. Taking the max of the two means you'll always
be missing some config space if the maximum number of both is
allocated by the user/device.
The WARN_ON_ONCE was added as suggested in a previous conversation
[1], but given the Tx/Rx imbalance in mlx5 (and probably other
devices) the warning will be more of a nuisance than a help, likely
triggering on every boot.
Regardless of how many we decide to allocate, the point I was making
above is that the WARN_ON_ONCE should likely be removed.
[1]: https://lore.kernel.org/lkml/20240902174944.293dfe4b@kernel.org/