lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20230818190653.78ca6e5a@kernel.org>
Date: Fri, 18 Aug 2023 19:06:53 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: David Ahern <dsahern@...nel.org>
Cc: Mina Almasry <almasrymina@...gle.com>, Praveen Kaligineedi
 <pkaligineedi@...gle.com>, Willem de Bruijn
 <willemdebruijn.kernel@...il.com>, netdev@...r.kernel.org, Eric Dumazet
 <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Jesper Dangaard
 Brouer <hawk@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 Magnus Karlsson <magnus.karlsson@...el.com>, sdf@...gle.com, Willem de
 Bruijn <willemb@...gle.com>, Kaiyuan Zhang <kaiyuanz@...gle.com>
Subject: Re: [RFC PATCH v2 02/11] netdev: implement netlink api to bind
 dma-buf to netdevice

On Fri, 18 Aug 2023 19:34:32 -0600 David Ahern wrote:
> On 8/18/23 3:52 PM, Mina Almasry wrote:
> > The sticking points are:
> > 1. From David: this proposal doesn't give an application the ability
> > to flush an rx queue, which means that we have to rely on a driver
> > reset that affects all queues to refill the rx queue buffers.  
> 
> Generically, the design needs to be able to flush (or invalidate) all
> references to the dma-buf once the process no longer "owns" it.

Are we talking about the ability for the app to flush the queue
when it wants to (do no idea what)? Or auto-flush when app crashes?

> > 2. From Jakub: the uAPI and implementation here needs to be in line
> > with his general direction & extensible to apply to existing use cases
> > `ethtool -L/-G`, etc.  
> 
> I think this is a bit more open ended given the openness of the netdev
> netlink API. i.e., managing a H/W queue (create, delete, stop / flush,
> associate a page_pool) could be done through this API.
> 
> > 
> > AFAIU this is what I need to do in the next version:
> > 
> > 1. The uAPI will be changed such that it will either re-configure an
> > existing queue to bind it to the dma-buf, or allocate a new queue
> > bound to the dma-buf (not sure which is better at the moment). Either  
> 
> 1. API to manage a page-pool (create, delete, update).

I wasn't anticipating a "create page pool" API.

I was thinking of a scheme where user space sets page pool parameters,
but the driver still creates the pool.

But I guess it is doable. More work, tho. Are there ibverbs which
can do it? lol.

> 2. API to add and remove a dma-buf (or host memory buffer) with a
> page-pool. Remove may take time to flush references pushed to hardware
> so this would be asynchronous.
> 
> 3. Create a queue or use an existing queue id and associate a page-pool
> with it.
> 
> > way, the configuration will take place immediately, and not rely on an
> > entire driver reset to actuate the change.  
> 
> yes
> 
> > 
> > 2. The uAPI will be changed such that if the netlink socket is closed,
> > or the process dies, the rx queue will be unbound from the dma-buf or
> > the rx queue will be freed entirely (again, not sure which is better  
> 
> I think those are separate actions. But, if the queue was created by and
> referenced by a process, then closing an fd means it should be freed.
> 
> > at the moment). The configuration will take place immediately without
> > relying on a driver reset.  
> 
> yes on the reset.
> 
> > 
> > 3. I will add 4 new net_device_ops that Jakub specified:
> > queue_mem_alloc/free(), and queue_start/stop().
> > 
> > 4. The uAPI mentioned in #1 will use the new net_device_ops to
> > allocate or reconfigure a queue attached to the provided dma-buf.

I'd leave 2, 3, 4 alone for now. Focus on binding a page pool to 
an existing queue.

> > Does this sound roughly reasonable here?
> > 
> > AFAICT the only technical difficulty is that I'm not sure it's
> > feasible for a driver to start or stop 1 rx-queue without triggering a
> > full driver reset. The (2) drivers I looked at both do a full reset to
> > change any queue configuration. I'll investigate.  
> 


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ