Message-ID: <ZtGiNF0wsCRhTtOF@LQ3V64L9R2>
Date: Fri, 30 Aug 2024 11:43:00 +0100
From: Joe Damato <jdamato@...tly.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, edumazet@...gle.com, amritha.nambiar@...el.com,
	sridhar.samudrala@...el.com, sdf@...ichev.me, bjorn@...osinc.com,
	hch@...radead.org, willy@...radead.org,
	willemdebruijn.kernel@...il.com, skhawaja@...gle.com,
	Martin Karsten <mkarsten@...terloo.ca>,
	Donald Hunter <donald.hunter@...il.com>,
	"David S. Miller" <davem@...emloft.net>,
	Paolo Abeni <pabeni@...hat.com>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
	Daniel Jurgens <danielj@...dia.com>,
	open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next 5/5] netdev-genl: Support setting per-NAPI
 config values

On Thu, Aug 29, 2024 at 03:31:05PM -0700, Jakub Kicinski wrote:
> On Thu, 29 Aug 2024 13:12:01 +0000 Joe Damato wrote:
> > +	napi = napi_by_id(napi_id);
> > +	if (napi)
> > +		err = netdev_nl_napi_set_config(napi, info);
> > +	else
> > +		err = -EINVAL;
> 
> if (napi) {
> ...
> } else {
> 	NL_SET_BAD_ATTR(info->extack, info->attrs[NETDEV_A_NAPI_ID])
> 	err = -ENOENT;
> }

Thanks, I'll make that change in the v2.

Should I send a Fixes patch for the same pattern in
netdev_nl_napi_get_doit?
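
For my own notes while I spin the v2: combining the above, the lookup
in the set handler would end up looking roughly like this (just a
sketch of your suggestion, not the literal v2 code), and the get doit
would grow the same else branch:

	napi = napi_by_id(napi_id);
	if (napi) {
		err = netdev_nl_napi_set_config(napi, info);
	} else {
		NL_SET_BAD_ATTR(info->extack, info->attrs[NETDEV_A_NAPI_ID]);
		err = -ENOENT;
	}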
 
> > +      doc: Set configurable NAPI instance settings.
> 
> We should pause and think here how configuring NAPI params should
> behave. NAPI instances are ephemeral, if you close and open the
> device (or for some drivers change any BPF or ethtool setting)
> the NAPIs may get wiped and recreated, discarding all configuration.
> 
> This is not how the sysfs API behaves, the sysfs settings on the device
> survive close. It's (weirdly?) also not how queues behave, because we
> have struct netdev{_rx,}_queue to store stuff persistently. Even tho
> you'd think queues are as ephemeral as NAPIs if not more.
> 
> I guess we can either document this, and move on (which may be fine,
> you have more practical experience than me). Or we can add an internal
> concept of a "channel" (which perhaps maybe if you squint is what
> ethtool -l calls NAPIs?) or just "napi_storage" as an array inside
> net_device and store such config there. For simplicity of matching
> config to NAPIs we can assume drivers add NAPI instances in order. 
> If driver wants to do something more fancy we can add a variant of
> netif_napi_add() which specifies the channel/storage to use.
> 
> Thoughts? I may be overly sensitive to the ephemeral thing, maybe
> I work with unfortunate drivers...

Thanks for pointing this out. I think this is an important case to
consider. Here's how I'm thinking about it.

There are two cases:

1) sysfs settings used by existing/legacy apps: if the NAPIs are
discarded and recreated, the code I added to netif_napi_add_weight
in patches 1 and 3 should take care of that case, preserving how
sysfs works today, I believe. I think we're good on this case?
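
To make sure we mean the same thing: the idea in those patches is
that a newly (re)created NAPI inherits the device-wide sysfs values,
so recreation is invisible to sysfs users. Paraphrasing from memory
(not the literal patch code, and the per-NAPI field names may end up
differing):

	/* In netif_napi_add_weight(): seed the per-NAPI values from the
	 * existing device-wide sysfs knobs so recreated NAPIs behave the
	 * same as before this series.
	 */
	napi->defer_hard_irqs = READ_ONCE(dev->napi_defer_hard_irqs);
	napi->gro_flush_timeout = READ_ONCE(dev->gro_flush_timeout);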

2) apps using netlink to set various custom settings. This seems
like a case where a future extension could add a notifier for NAPI
changes (like the netdevice notifier?).

If you think this is a good idea, then we'd do something like:
  1. Document that the NAPI settings are wiped when NAPIs are wiped
  2. In the future (not part of this series) a NAPI notifier is
     added (a rough sketch of what I'm imagining follows below)
  3. User apps can then listen for NAPI create/delete events and
     update settings when a NAPI is created. I think it would be
     helpful for user apps to know about NAPI create/delete events
     in general, because they mean NAPI IDs are changing.
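
To be explicit about (2), I'm picturing something vaguely like the
below, modeled on the netdevice notifier. Every name here
(napi_notifier_chain, register_napi_notifier, NAPI_CREATED/DESTROYED)
is made up for illustration; nothing like this exists today, and the
user-visible piece would presumably be a netlink notification
generated from the same spot:

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Hypothetical only: a NAPI notifier chain, like the netdevice one. */
enum napi_notifier_event {
	NAPI_CREATED,	/* netif_napi_add() finished, NAPI ID assigned */
	NAPI_DESTROYED,	/* netif_napi_del() is tearing the instance down */
};

static BLOCKING_NOTIFIER_HEAD(napi_notifier_chain);

int register_napi_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&napi_notifier_chain, nb);
}

/* Called from netif_napi_add()/netif_napi_del(); a netlink
 * notification to user space could be emitted from here as well,
 * carrying the (new) NAPI ID.
 */
static void napi_notify(unsigned long event, struct napi_struct *napi)
{
	blocking_notifier_call_chain(&napi_notifier_chain, event, napi);
}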

One could argue:

  When wiping/recreating a NAPI for an existing HW queue, that HW
  queue gets a new NAPI ID associated with it. User apps operating
  at this level probably care about NAPI IDs changing (as it affects
  epoll busy poll). Since the settings in this series are per-NAPI
  (and not per HW queue), the argument would be that user apps need
  to set up NAPIs when they are created, and that settings do not
  persist between NAPIs with different IDs even if they are
  associated with the same HW queue.
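
By "apps operating at this level" I mean things that key off the
per-socket NAPI ID, which is exactly the value that changes when a
NAPI is torn down and recreated. A minimal user-space sketch of
reading it, just for illustration:

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56	/* from asm-generic/socket.h */
#endif

/* Print the NAPI ID that last handled traffic for this socket; apps
 * doing epoll busy poll use this to group sockets by NAPI.
 */
static void print_napi_id(int fd)
{
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
		       &napi_id, &len) == 0)
		printf("fd %d: NAPI ID %u\n", fd, napi_id);
}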

Admittedly, from the perspective of a user, it would be nice if a
new NAPI created for an existing HW queue retained the previous
settings so that I, as the user, could do less work.

But what happens if a HW queue is destroyed and recreated? Will any
HW settings be retained? And does that have any influence on what we
do in software? See below.

This part of your message:

> we can assume drivers add NAPI instances in order. If driver wants
> to do something more fancy we can add a variant of
> netif_napi_add() which specifies the channel/storage to use.

assumes drivers will "do a thing", so to speak, and that makes me
uneasy.
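
Just to make sure I'm reading the napi_storage idea the way you
intend, here's roughly the shape I have in mind. All of the names
below (struct napi_storage, netif_napi_add_storage, the field names)
are invented for illustration only:

#include <linux/netdevice.h>

/* Hypothetical: persistent per-"channel" config that survives the
 * NAPI instances themselves being destroyed and recreated.
 */
struct napi_storage {
	unsigned long	gro_flush_timeout;
	u32		defer_hard_irqs;
	/* future knobs, e.g. an IRQ suspend timeout */
};

/*
 * struct net_device would carry an array of these, sized by the
 * channel count, and by default NAPIs would bind to slots in the
 * order the driver adds them:
 *
 *	struct napi_storage	*napi_storage;
 */

/* Variant of netif_napi_add() for drivers that want to pick the
 * storage slot explicitly instead of relying on add order.
 */
void netif_napi_add_storage(struct net_device *dev,
			    struct napi_struct *napi,
			    int (*poll)(struct napi_struct *, int),
			    unsigned int storage_index);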

I started to wonder: how do drivers handle per-queue HW IRQ coalesce
settings when queue counts increase? It's a different but adjacent
problem, I think.

I tried a couple of experiments on mlx5 and got very strange results
that deserve their own thread; I didn't want to take this thread too
far off track.

I think you have much more practical experience when it comes to
dealing with drivers, so I am happy to follow your lead on this one,
but with my limited driver experience, assuming drivers will "do a
thing" seems mildly scary to me.

My two goals with this series are:
  1. Make it possible to set these values per NAPI
  2. Unblock the IRQ suspension series by threading the suspend
     parameter through the code path carved in this series

So, I'm happy to proceed with this series however you prefer,
whether that's documentation or "napi_storage"; I think you are
probably the best person to answer that question :)
