Message-ID: <20240328203808.GL651713@kernel.org>
Date: Thu, 28 Mar 2024 20:38:08 +0000
From: Simon Horman <horms@...nel.org>
To: Mina Almasry <almasrymina@...gle.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-alpha@...r.kernel.org,
linux-mips@...r.kernel.org, linux-parisc@...r.kernel.org,
sparclinux@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, bpf@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-media@...r.kernel.org,
dri-devel@...ts.freedesktop.org,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>,
Richard Henderson <richard.henderson@...aro.org>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>,
Matt Turner <mattst88@...il.com>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
Helge Deller <deller@....de>, Andreas Larsson <andreas@...sler.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Arnd Bergmann <arnd@...db.de>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Steffen Klassert <steffen.klassert@...unet.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
David Ahern <dsahern@...nel.org>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Shuah Khan <shuah@...nel.org>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Pavel Begunkov <asml.silence@...il.com>, David Wei <dw@...idwei.uk>,
Jason Gunthorpe <jgg@...pe.ca>,
Yunsheng Lin <linyunsheng@...wei.com>,
Shailend Chand <shailend@...gle.com>,
Harshitha Ramamurthy <hramamurthy@...gle.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Jeroen de Borst <jeroendb@...gle.com>,
Praveen Kaligineedi <pkaligineedi@...gle.com>,
Willem de Bruijn <willemb@...gle.com>,
Kaiyuan Zhang <kaiyuanz@...gle.com>
Subject: Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to
netdevice
On Thu, Mar 28, 2024 at 11:55:23AM -0700, Mina Almasry wrote:
> On Thu, Mar 28, 2024 at 11:28 AM Simon Horman <horms@...nel.org> wrote:
> >
> > On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> > > Add a netdev_dmabuf_binding struct which represents the
> > > dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> > > rx queues on the netdevice. When the binding is created, dma_buf_attach
> > > and dma_buf_map_attachment are called. The entries in the resulting
> > > sg_table are inserted into a genpool to make them ready
> > > for allocation.
> > >
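
For readers not familiar with the dma-buf import path, a minimal sketch of
the attach/map/genpool steps described above could look roughly like the
following. The field names on 'binding' (dmabuf, attachment, sgt, chunk_pool)
and the helper name are assumptions for illustration, not the patch's actual
layout; the real series also records per-chunk owner metadata (see the next
paragraph).

#include <linux/dma-buf.h>
#include <linux/genalloc.h>
#include <linux/netdevice.h>

/* Sketch only: import a dma-buf and feed its DMA-mapped ranges into a
 * genpool so they can later be handed out as allocations.
 */
static int devmem_import_sketch(struct net_device *dev, int dmabuf_fd,
				struct net_devmem_dmabuf_binding *binding)
{
	struct scatterlist *sg;
	struct sg_table *sgt;
	int i, err;

	binding->dmabuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(binding->dmabuf))
		return PTR_ERR(binding->dmabuf);

	binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
	if (IS_ERR(binding->attachment)) {
		err = PTR_ERR(binding->attachment);
		goto err_put_dmabuf;
	}

	sgt = dma_buf_map_attachment(binding->attachment, DMA_FROM_DEVICE);
	if (IS_ERR(sgt)) {
		err = PTR_ERR(sgt);
		goto err_detach;
	}

	/* One genpool chunk per sg entry; allocations are later carved out
	 * of these chunks in PAGE_SIZE units.
	 */
	for_each_sgtable_dma_sg(sgt, sg, i) {
		err = gen_pool_add_virt(binding->chunk_pool,
					(unsigned long)sg_dma_address(sg),
					sg_dma_address(sg), sg_dma_len(sg),
					NUMA_NO_NODE);
		if (err)
			goto err_unmap;
	}

	binding->sgt = sgt;
	return 0;

err_unmap:
	dma_buf_unmap_attachment(binding->attachment, sgt, DMA_FROM_DEVICE);
err_detach:
	dma_buf_detach(binding->dmabuf, binding->attachment);
err_put_dmabuf:
	dma_buf_put(binding->dmabuf);
	return err;
}
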
> > > The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
> > > holds the dma-buf offset of the base of the chunk and the dma_addr of
> > > the chunk. Both are needed to use allocations that come from this chunk.
> > >
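
As a rough illustration of that ownership, the per-chunk bookkeeping could be
as small as the struct below. The struct and field names are guesses based on
the description above, not the patch's actual definitions.

/* Sketch: metadata attached to each genpool chunk. */
struct dmabuf_chunk_owner_sketch {
	/* Offset of this chunk's base from the start of the dma-buf. */
	unsigned long base_offset;
	/* DMA address of the chunk's base, as returned by the mapping. */
	dma_addr_t base_dma_addr;
	/* Back-pointer to the binding that owns this chunk. */
	struct net_devmem_dmabuf_binding *binding;
};
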
> > > We create a new type that represents an allocation from the genpool:
> > > net_iov. We set the net_iov allocation size in the
> > > genpool to PAGE_SIZE for simplicity: this matches the PAGE_SIZE normally
> > > allocated by the page pool and given to the drivers.
> > >
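
A hedged sketch of such an allocation, using gen_pool_alloc_owner() to
recover the chunk owner defined above; for simplicity it returns the dma_addr
and dma-buf offset directly rather than constructing a net_iov:

static dma_addr_t
net_devmem_alloc_page_sketch(struct net_devmem_dmabuf_binding *binding,
			     unsigned long *dmabuf_offset)
{
	struct dmabuf_chunk_owner_sketch *owner;
	unsigned long addr;

	/* PAGE_SIZE allocations, matching what the page pool normally hands
	 * to drivers.
	 */
	addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
				    (void **)&owner);
	if (!addr)
		return 0;

	/* Both pieces of chunk metadata are needed to use the allocation:
	 * its dma_addr and its offset into the dma-buf.
	 */
	*dmabuf_offset = owner->base_offset + (addr - owner->base_dma_addr);
	return (dma_addr_t)addr;
}
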
> > > The user can unbind the dmabuf from the netdevice by closing the netlink
> > > socket that established the binding. We do this so that the binding is
> > > automatically unbound even if the userspace process crashes.
> > >
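
The unbind path isn't quoted here, but conceptually the socket-close hook
would undo the per-queue state that the bind path below sets up and drop the
socket's reference on the binding. A very rough, hypothetical sketch:

/* Sketch: tear down a binding when the netlink socket that created it goes
 * away. Helper names are hypothetical; the affected queues would also need
 * to be restarted so the driver stops using the dma-buf-backed page pool.
 */
static void net_devmem_unbind_sketch(struct net_devmem_dmabuf_binding *binding)
{
	struct netdev_rx_queue *rxq;
	unsigned long xa_idx;

	rtnl_lock();
	xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
		WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
		WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
		xa_erase(&binding->bound_rxq_list, xa_idx);
	}
	rtnl_unlock();

	/* Drop the socket's reference (see the refcount sketch below). */
	net_devmem_binding_put_sketch(binding);
}
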
> > > The binding and unbinding leave an indicator in struct netdev_rx_queue
> > > that the given queue is bound, but the binding doesn't take effect until
> > > the driver actually reconfigures its queues and re-initializes its page
> > > pool.
> > >
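
On the driver side, picking up that indicator at queue (re)initialization
time might look roughly like this; the fields read here match the quoted
code below, but the surrounding driver logic is purely illustrative:

static void drv_setup_rx_queue_sketch(struct net_device *dev, u32 rxq_idx)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
	struct net_devmem_dmabuf_binding *binding;

	/* Pairs with the WRITE_ONCE() in the bind path quoted below. */
	binding = READ_ONCE(rxq->mp_params.mp_priv);
	if (binding) {
		/* Queue is bound to a dma-buf: set up the page pool so it
		 * draws memory from the binding's genpool instead of the
		 * page allocator.
		 */
	}
}
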
> > > The netdev_dmabuf_binding struct is refcounted, and releases its
> > > resources only when all the refs are released.
> > >
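
A minimal sketch of that refcounting, assuming a 'ref' refcount_t in the
binding and the hypothetical field names used in the earlier sketches:

static inline void
net_devmem_binding_get_sketch(struct net_devmem_dmabuf_binding *binding)
{
	refcount_inc(&binding->ref);
}

static void
net_devmem_binding_put_sketch(struct net_devmem_dmabuf_binding *binding)
{
	if (!refcount_dec_and_test(&binding->ref))
		return;

	/* Last reference gone: unmap and detach the dma-buf, destroy the
	 * genpool, then free the binding itself.
	 */
	dma_buf_unmap_attachment(binding->attachment, binding->sgt,
				 DMA_FROM_DEVICE);
	dma_buf_detach(binding->dmabuf, binding->attachment);
	dma_buf_put(binding->dmabuf);
	gen_pool_destroy(binding->chunk_pool);
	kfree(binding);
}
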
> > > Signed-off-by: Willem de Bruijn <willemb@...gle.com>
> > > Signed-off-by: Kaiyuan Zhang <kaiyuanz@...gle.com>
> > > Signed-off-by: Mina Almasry <almasrymina@...gle.com>
> >
> > ...
> >
> > > +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > > + struct net_devmem_dmabuf_binding *binding)
> > > +{
> > > + struct netdev_rx_queue *rxq;
> > > + u32 xa_idx;
> > > + int err;
> > > +
> > > + if (rxq_idx >= dev->num_rx_queues)
> > > + return -ERANGE;
> > > +
> > > + rxq = __netif_get_rx_queue(dev, rxq_idx);
> > > + if (rxq->mp_params.mp_priv)
> > > + return -EEXIST;
> > > +
> > > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > > + GFP_KERNEL);
> > > + if (err)
> > > + return err;
> > > +
> > > + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > > + * race with another thread that is also modifying this value. However,
> > > + * the driver may read this config while it's creating its rx-queues.
> > > + * Use WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > > + */
> > > + WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);
> >
> > Hi Mina,
> >
> > This causes a build failure because dmabuf_devmem_ops is not added until a
> > subsequent patch in this series.
> >
>
> My apologies. I do notice the failure in patchwork now. I'll do a
> patch-by-patch build for the next iteration.
Thanks, much appreciated.
> > > + WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> > > +
> > > + err = net_devmem_restart_rx_queue(dev, rxq_idx);
> > > + if (err)
> > > + goto err_xa_erase;
> > > +
> > > + return 0;
> > > +
> > > +err_xa_erase:
> > > + WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> > > + WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> > > + xa_erase(&binding->bound_rxq_list, xa_idx);
> > > +
> > > + return err;
> > > +}
> >
> > ...
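
For completeness, a caller of the function above might look roughly like the
sketch below: a netlink request handler binding one rx queue while holding
rtnl_lock, as the comment in the quoted code requires. Attribute parsing and
creation of the binding are elided, and the handler name is hypothetical.

static int netdev_nl_bind_rx_sketch(struct net_device *dev, u32 rxq_idx,
				    struct net_devmem_dmabuf_binding *binding)
{
	int err;

	rtnl_lock();
	err = net_devmem_bind_dmabuf_to_queue(dev, rxq_idx, binding);
	rtnl_unlock();

	return err;
}
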
>
>
>
> --
> Thanks,
> Mina
>