Message-ID: <CAHS8izNuVoe67zdR3_=m+E8X0nLCGYnAJH0dvQZ8QBTNPHESpQ@mail.gmail.com>
Date: Thu, 2 May 2024 10:15:08 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: David Wei <dw@...idwei.uk>
Cc: netdev@...r.kernel.org, Michael Chan <michael.chan@...adcom.com>, 
	Pavan Chebbi <pavan.chebbi@...adcom.com>, 
	Andy Gospodarek <andrew.gospodarek@...adcom.com>, Shailend Chand <shailend@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>, 
	Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [RFC PATCH net-next v1 1/3] queue_api: define queue api

On Wed, May 1, 2024 at 7:44 PM David Wei <dw@...idwei.uk> wrote:
>
> On 2024-04-30 11:00 am, Mina Almasry wrote:
> >
> > Sorry, I think we raced a bit: we updated our interface definition
> > based on your/Jakub's feedback to expose the size of the struct for
> > core to allocate, so it now looks like this for us:
> >
> > +struct netdev_queue_mgmt_ops {
> > +       size_t                  ndo_queue_mem_size;
> > +       int                     (*ndo_queue_mem_alloc)(struct net_device *dev,
> > +                                                      void *per_queue_mem,
> > +                                                      int idx);
> > +       void                    (*ndo_queue_mem_free)(struct net_device *dev,
> > +                                                     void *per_queue_mem);
> > +       int                     (*ndo_queue_start)(struct net_device *dev,
> > +                                                  void *per_queue_mem,
> > +                                                  int idx);
> > +       int                     (*ndo_queue_stop)(struct net_device *dev,
> > +                                                 void *per_queue_mem,
> > +                                                 int idx);
> > +};
> > +
> >
> > The idea is that ndo_queue_mem_size is the size of the memory
> > per_queue_mem points to.
> >
> > The rest of the functions are slightly modified to allow core to
> > allocate the memory and pass in the pointer for the driver to fill
> > in/use. I think Shailend is close to posting the patches; let us
> > know if you see any issues.
> >
>
> Hmm. Thinking about this a bit more, are you having netdev core manage
> all of the queues, i.e. alloc/free during open()/stop()?

No, we do not modify open()/stop(); I think that is the plan for the
future. However, when it comes to the future direction of queue
management, I think that's more Jakub's purview, so I'm leaving it up
to him. For devmem TCP I'm just implementing what we need, and I'm
trusting that it is aligned with the general direction Jakub wants to
take things in eventually, as he hasn't (yet) complained in the
reviews :D

> Otherwise how
> can netdev core pass the old queue mem into ndo_queue_stop(), and
> where is the queue mem stored?
>

What we have in mind is:

1. driver declares how much memory it needs in ndo_queue_mem_size
2. Core kzalloc's that much memory.
3. Core passes a pointer to that memory to ndo_queue_stop. The driver
fills in the memory and stops the queue.

Then, core will have a pointer to the 'old queue mem'. Core can then
free that memory if allocating/starting the new queue succeeded, or
call ndo_queue_start(old_mem) if it needs to bring the old queue back
up.
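
Roughly, the core side could look like this (just a sketch; the
queue_mgmt_ops hookup and the function below are made up for
illustration, they are not the actual patches):

/*
 * Sketch of the core-side restart flow. Assumes a dev->queue_mgmt_ops
 * pointer; every name here is illustrative, not from the real series.
 */
static int netdev_restart_queue(struct net_device *dev, int idx)
{
        const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
        void *old_mem, *new_mem;
        int err = -ENOMEM;

        /* Core allocates both containers up front (step 2 above). */
        old_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
        new_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
        if (!old_mem || !new_mem)
                goto err_free;

        /* Driver allocates resources for the new queue into new_mem. */
        err = ops->ndo_queue_mem_alloc(dev, new_mem, idx);
        if (err)
                goto err_free;

        /* Driver stops the old queue, saving its state into old_mem. */
        err = ops->ndo_queue_stop(dev, old_mem, idx);
        if (err)
                goto err_free_new;

        err = ops->ndo_queue_start(dev, new_mem, idx);
        if (err) {
                /* Roll back: bring the old queue back up from old_mem. */
                ops->ndo_queue_start(dev, old_mem, idx);
                goto err_free_new;
        }

        /* New queue is live; release the old queue's resources. */
        ops->ndo_queue_mem_free(dev, old_mem);
        kfree(old_mem);
        kfree(new_mem);
        return 0;

err_free_new:
        ops->ndo_queue_mem_free(dev, new_mem);
err_free:
        kfree(old_mem);
        kfree(new_mem);
        return err;
}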

We do something similar for ndo_queue_mem_alloc:

1. driver declares how much memory it needs in ndo_queue_mem_size
2. Core kzalloc's that much memory.
3. Core passes that memory to ndo_queue_mem_alloc. The driver allocs
the resources for a new queue, attaches them to the passed pointer,
and returns (driver-side sketch below).
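
On the driver side, ndo_queue_mem_alloc would just treat the passed
pointer as its own struct, something like this (the my_* names and
struct layout are invented purely to illustrate):

/* Driver-side sketch; assumes the netdev_queue_mgmt_ops above. */
struct my_queue_mem {
        struct my_rx_ring *rx_ring;
        /* ... whatever else the driver keeps per queue ... */
};

/* Driver-specific helper, assumed to exist elsewhere in the driver. */
struct my_rx_ring *my_alloc_rx_ring(struct net_device *dev, int idx);

static int my_queue_mem_alloc(struct net_device *dev, void *per_queue_mem,
                              int idx)
{
        /* Core kzalloc'ed ndo_queue_mem_size bytes for us. */
        struct my_queue_mem *qmem = per_queue_mem;

        qmem->rx_ring = my_alloc_rx_ring(dev, idx);
        if (!qmem->rx_ring)
                return -ENOMEM;
        return 0;
}

static const struct netdev_queue_mgmt_ops my_queue_mgmt_ops = {
        .ndo_queue_mem_size     = sizeof(struct my_queue_mem),
        .ndo_queue_mem_alloc    = my_queue_mem_alloc,
        /* .ndo_queue_mem_free / _start / _stop omitted here */
};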

We can also discuss over a public netdev bbb call if face-to-face time
makes it easier to align quickly.

> Or is it the new queue mem being passed into ndo_queue_stop()?
>
> My takeaway from the discussion on Shailend's last RFC is that for the
> first iteration we'll keep what we had before and have the driver
> alloc/free the qmem. Implementing this for bnxt felt pretty good as
> well.

We can certainly switch back to what we had before and remove the
ndo_queue_mem_size we added if you've changed your mind; not a big
deal.

-- 
Thanks,
Mina
