Date: Tue, 15 Aug 2023 17:16:38 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: David Ahern <dsahern@...nel.org>
Cc: Mina Almasry <almasrymina@...gle.com>, netdev@...r.kernel.org, Eric
 Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Jesper
 Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas
 <ilias.apalodimas@...aro.org>, Magnus Karlsson <magnus.karlsson@...el.com>,
 Willem de Bruijn <willemdebruijn.kernel@...il.com>, sdf@...gle.com, Willem
 de Bruijn <willemb@...gle.com>, Kaiyuan Zhang <kaiyuanz@...gle.com>
Subject: Re: [RFC PATCH v2 02/11] netdev: implement netlink api to bind
 dma-buf to netdevice

On Sun, 13 Aug 2023 19:10:35 -0600 David Ahern wrote:
> Also, this suggests that the Rx queue is unique to the flow. I do not
> recall a netdev API to create H/W queues on the fly (only a passing
> comment from Kuba), so how is the H/W queue (or queue set since a
> completion queue is needed as well) created for the flow?
> And in turn if it is unique to the flow, what deletes the queue if
> an app does not do a proper cleanup? If the queue sticks around,
> the dmabuf references stick around.

Let's start sketching out the design for queue config.
Without sliding into scope creep, hopefully.

Step one - I think we can decompose the problem into:
 A) flow steering
 B) object lifetime and permissions
 C) queue configuration (incl. potentially creating / destroying queues)

These come together into use scenarios like:
 #1 - partitioning for containers - when high perf containers share
      a machine each should get an RSS context on the physical NIC
      to have predictable traffic<>CPU placement, they may also have
      different preferences on how the queues are configured, maybe
      XDP, too?
 #2 - fancy page pools within the host (e.g. huge pages)
 #3 - very fancy page pools not within the host (Mina's work)
 #4 - XDP redirect target (allowing XDP_REDIRECT without installing XDP
      on the target)
 #5 - busy polling - admittedly a bit theoretical; I don't know of
      anyone busy polling in real life, but one of the problems today
      is that setting it up requires scraping random bits of info from
      sysfs and a lot of hoping (rough sketch of today's dance below).
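
Something like this, today (the socket options exist; the sysfs knobs
are from memory, so treat the details as illustrative):

/* Illustrative only: what busy poll setup looks like for an app today. */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL		46	/* asm-generic value */
#endif
#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID	56	/* asm-generic value */
#endif

static int setup_busy_poll(int fd)
{
	int usecs = 50;			/* poll up to 50us per read */
	unsigned int napi_id = 0;
	socklen_t len = sizeof(napi_id);

	/* 1. Per-socket busy poll timeout. */
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs)))
		return -1;

	/* 2. Ask which NAPI instance the flow actually landed on... */
	if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
		return -1;

	/* 3. ...then scrape sysfs to map that to a queue / IRQ and tune
	 *    /sys/class/net/$dev/napi_defer_hard_irqs,
	 *    /sys/class/net/$dev/gro_flush_timeout, etc., and hope the
	 *    flow stays on that queue.
	 */
	printf("napi id %u, now go dig through sysfs\n", napi_id);
	return 0;
}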

Flow steering (A) is there today, to a sufficient extent, I think,
so we can defer on that. Sooner or later we should probably figure
out if we want to continue down the unruly path of TC offloads or
just give up and beef up ethtool.

I don't have a good sense of what a good model for cleanup and
permissions is (B). All I know is that if we need to tie things to
processes netlink can do it, and we shouldn't have to create our
own FS and special file descriptors...

And then there's (C) which is the main part to talk about.
The first step IMHO is to straighten out the configuration process.
Currently we do:

 user -> thin ethtool API --------------------> driver
                              netdev core <---'

By "straighten" I mean more of a:

 user -> thin ethtool API ---> netdev core ---> driver

flow. This means the core maintains the full expected configuration
(queue count and per-queue parameters), and the driver creates those
queues as instructed.

I'd imagine we'd need 4 basic ops:
 - queue_mem_alloc(dev, cfg) -> queue_mem
 - queue_mem_free(dev, cfg, queue_mem)
 - queue_start(dev, queue info, cfg, queue_mem) -> errno
 - queue_stop(dev, queue info, cfg)

The mem_alloc/mem_free pair takes care of the commonly missed
requirement not to take the datapath down until the resources for the
new config have been allocated.
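
As a strawman, with every name below made up (this is not an existing
NDO set):

struct net_device;
struct netdev_queue_cfg;	/* full desired config for one queue */
struct netdev_queue_mem;	/* driver-private resources for one queue */

/* Hypothetical driver ops, all four called by the core. */
struct netdev_queue_ops {
	/* Allocate everything the new config needs; datapath untouched. */
	struct netdev_queue_mem *
		(*queue_mem_alloc)(struct net_device *dev,
				   const struct netdev_queue_cfg *cfg);
	void (*queue_mem_free)(struct net_device *dev,
			       const struct netdev_queue_cfg *cfg,
			       struct netdev_queue_mem *mem);
	/* Swap pre-allocated resources into / out of the datapath;
	 * "queue info" is just an index here for simplicity.
	 */
	int (*queue_start)(struct net_device *dev, int queue_idx,
			   const struct netdev_queue_cfg *cfg,
			   struct netdev_queue_mem *mem);
	int (*queue_stop)(struct net_device *dev, int queue_idx,
			  const struct netdev_queue_cfg *cfg);
};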

The core then sets all the queues up after ndo_open, and tears them
down before ndo_stop. In case of an ethtool -L / -G call, or enabling
/ disabling XDP, the core can handle the entire reconfiguration dance.
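
Building on the strawman above, the core-side dance for e.g. an
ethtool -G call could look roughly like this:

/* Hypothetical core-side reconfig of one queue: resources for the new
 * config are allocated before the old queue is stopped, so an
 * allocation failure never leaves the datapath down.
 */
static int netdev_queue_reconfig(struct net_device *dev,
				 const struct netdev_queue_ops *ops,
				 int queue_idx,
				 const struct netdev_queue_cfg *old_cfg,
				 struct netdev_queue_mem *old_mem,
				 const struct netdev_queue_cfg *new_cfg)
{
	struct netdev_queue_mem *new_mem;
	int err;

	/* 1. Allocate for the new config while the old queue still runs. */
	new_mem = ops->queue_mem_alloc(dev, new_cfg);
	if (!new_mem)
		return -ENOMEM;

	/* 2. Only now take the old queue down and start the new one. */
	ops->queue_stop(dev, queue_idx, old_cfg);
	err = ops->queue_start(dev, queue_idx, new_cfg, new_mem);
	if (err) {
		/* Roll back to the old, still-allocated resources. */
		ops->queue_start(dev, queue_idx, old_cfg, old_mem);
		ops->queue_mem_free(dev, new_cfg, new_mem);
		return err;
	}

	/* 3. Free the old resources only once the new queue is running. */
	ops->queue_mem_free(dev, old_cfg, old_mem);
	return 0;
}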

The cfg object needs to contain all queue configuration, including 
the page pool parameters.
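
Something along these lines, with purely illustrative fields:

/* Hypothetical core-owned per-queue config; fields are examples, not
 * an existing structure.
 */
struct netdev_queue_cfg {
	/* ring geometry (ethtool -G) */
	u32	ring_size;
	/* page pool parameters the core wants to dictate */
	u32	pp_pool_size;		/* descriptor count */
	u32	pp_recycle_ring_size;
	int	pp_numa_node;
	void	*pp_memory_provider;	/* e.g. a dma-buf provider */
	/* header-data split, XDP attachment, etc. could live here too */
	bool	hds_enabled;
};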

If we have an abstract model of the configuration in the core we can
modify it much more easily, I hope. I mean - the configuration will be
somewhat detached from what's instantiated in the drivers.

I'd prefer to go as far as we can without introducing a driver callback
to "check if it can support a config change", and try to rely on
(static) capabilities instead. This allows more of the validation to
happen in the core and also lends itself naturally to exporting the
capabilities to the user.
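
For example, the driver could declare its limits once, statically
(made-up fields again), and the core both validates requests against
them and dumps them to user space:

/* Hypothetical static capabilities a driver registers at probe time. */
struct netdev_queue_caps {
	u32	max_rx_queues;
	u32	max_tx_queues;
	u32	max_ring_size;
	bool	per_queue_page_pool;	/* per-queue pp params supported */
	bool	hds;			/* header-data split */
	bool	xdp_tx_independent;	/* XDP TX queues w/o XDP program */
};

/* Core-side validation, no driver callback involved. */
static int netdev_queue_cfg_validate(const struct netdev_queue_caps *caps,
				     const struct netdev_queue_cfg *cfg)
{
	if (cfg->ring_size > caps->max_ring_size)
		return -EINVAL;
	if (cfg->hds_enabled && !caps->hds)
		return -EOPNOTSUPP;
	return 0;
}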

Checking the use cases:

 #1 - partitioning for containers - storing the cfg in the core gives
      us a neat ability to allow users to set the configuration on an
      RSS context
 #2, #3 - page pools - we can make page_pool_create take cfg and read
      whatever params we want from there: memory provider, descriptor
      count, recycling ring size, etc. Also for header-data-split we
      may want different settings per queue, so again cfg comes in
      handy (driver-side sketch after this list)
 #4 - XDP redirect target - we should spawn XDP TX queues independently from
      the XDP configuration
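
For the driver side of #2 / #3, page_pool_create() already takes a
params struct today, so it would mostly be a matter of filling it from
the core-owned cfg (cfg fields as in the strawman above; the memory
provider hook is the part that would be new):

#include <net/page_pool.h>

/* Driver-side sketch: build the page pool for one RX queue from the
 * hypothetical core-owned cfg.
 */
static struct page_pool *
rxq_create_page_pool(struct device *dma_dev,
		     const struct netdev_queue_cfg *cfg)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= cfg->pp_pool_size,
		.nid		= cfg->pp_numa_node,
		.dev		= dma_dev,
		.dma_dir	= DMA_FROM_DEVICE,
		/* + memory provider from cfg->pp_memory_provider, once
		 *   such a field exists in page_pool_params
		 */
	};

	return page_pool_create(&pp);
}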

That's all I have thought up in terms of direction.
Does that make sense? What are the main gaps? Other proposals?
