Message-ID: <CAHp75Vf8QhosJw79U97rA6u0KHY9avmzTMBUqEyWkY6jxBuPYg@mail.gmail.com>
Date: Mon, 28 Mar 2022 23:46:02 +0300
From: Andy Shevchenko <andy.shevchenko@...il.com>
To: Paul Cercueil <paul@...pouillou.net>
Cc: Jonathan Cameron <jic23@...nel.org>,
Michael Hennerich <Michael.Hennerich@...log.com>,
Lars-Peter Clausen <lars@...afoo.de>,
Christian König <christian.koenig@....com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Jonathan Corbet <corbet@....net>,
Alexandru Ardelean <ardeleanalex@...il.com>,
dri-devel <dri-devel@...ts.freedesktop.org>,
linaro-mm-sig@...ts.linaro.org,
Linux Documentation List <linux-doc@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-iio <linux-iio@...r.kernel.org>
Subject: Re: [PATCH v2 05/12] iio: core: Add new DMABUF interface infrastructure
On Tue, Feb 8, 2022 at 5:26 PM Paul Cercueil <paul@...pouillou.net> wrote:
>
> Add the necessary infrastructure to the IIO core to support a new
> optional DMABUF based interface.
>
> The advantage of this new DMABUF based interface vs. the read()
> interface, is that it avoids an extra copy of the data between the
> kernel and userspace. This is particularly userful for high-speed
s/userful/useful/
> devices which produce several megabytes or even gigabytes of data per
> second.
>
> The data in this new DMABUF interface is managed at the granularity of
> DMABUF objects. Reducing the granularity from byte level to block level
> is done to reduce the userspace-kernelspace synchronization overhead
> since performing syscalls for each byte at a few Mbps is just not
> feasible.
>
> This of course leads to a slightly increased latency. For this reason an
> application can choose the size of the DMABUFs as well as how many it
> allocates. E.g. two DMABUFs would be a traditional double buffering
> scheme. But using a higher number might be necessary to avoid
> underflow/overflow situations in the presence of scheduling latencies.
>
> As part of the interface, 2 new IOCTLs have been added:
>
> IIO_BUFFER_DMABUF_ALLOC_IOCTL(struct iio_dmabuf_alloc_req *):
> Each call will allocate a new DMABUF object. The return value (unless
> it is a negative errno value indicating an error) will be the file
> descriptor of the new DMABUF.
>
> IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
> Place the DMABUF object into the queue pending hardware processing.
>
> These two IOCTLs have to be performed on the IIO buffer's file
> descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
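
For context, a rough userspace sketch of how the two ioctls chain
together, where buffer_fd is the descriptor obtained with
IIO_BUFFER_GET_FD_IOCTL(). It is untested; the field names of
struct iio_dmabuf_alloc_req and struct iio_dmabuf, and the uapi header
path, are guesses based on the description above, not taken from the
patch:

#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/iio/buffer.h>   /* assumed location of the new uapi bits */

static int alloc_and_enqueue(int buffer_fd, unsigned long long size)
{
        struct iio_dmabuf_alloc_req req = { .size = size };
        struct iio_dmabuf block = { 0 };
        int dmabuf_fd;

        /* On success the ioctl returns the new DMABUF's file descriptor. */
        dmabuf_fd = ioctl(buffer_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);
        if (dmabuf_fd < 0)
                return -1;

        block.fd = dmabuf_fd;
        block.bytes_used = 0;   /* 0 == use the whole buffer, per the doc */

        /* Hand the block to the kernel; it is now pending for the hardware. */
        if (ioctl(buffer_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &block) < 0) {
                close(dmabuf_fd);
                return -1;
        }

        return dmabuf_fd;
}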
>
> To access the data stored in a block from userspace, the block must be
> mapped into the process's memory. This is done by calling mmap() on the
> DMABUF's file descriptor.
>
> Before accessing the data through the map, you must use the
> DMA_BUF_IOCTL_SYNC(struct dma_buf_sync *) ioctl, with the
> DMA_BUF_SYNC_START flag, to make sure that the data is available.
> This call may block until the hardware is done with this block. Once
> you are done reading or writing the data, you must use this ioctl again
> with the DMA_BUF_SYNC_END flag, before enqueueing the DMABUF to the
> kernel's queue.
>
> If you need to know when the hardware is done with a DMABUF, you can
> poll its file descriptor for the EPOLLOUT event.
>
> Finally, to destroy a DMABUF object, simply call close() on its file
> descriptor.
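
Again for context only: an untested sketch of one access cycle as
described above, using the existing DMA-BUF sync uapi from
<linux/dma-buf.h>. It polls with poll()/POLLOUT rather than epoll, and
assumes a capture (read) direction; 'size' must match the allocated
DMABUF size:

#include <poll.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/dma-buf.h>

static int consume_block(int dmabuf_fd, size_t size)
{
        struct dma_buf_sync sync = { 0 };
        struct pollfd pfd = { .fd = dmabuf_fd, .events = POLLOUT };
        void *data;

        /* Wait until the hardware is done with this DMABUF. */
        if (poll(&pfd, 1, -1) < 0)
                return -1;

        data = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
        if (data == MAP_FAILED)
                return -1;

        /* Claim the buffer for CPU access before touching the mapping. */
        sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        /* ... process the samples in 'data' ... */

        /* Release CPU access before the block is enqueued again. */
        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        munmap(data, size);
        close(dmabuf_fd);       /* destroys the DMABUF object */
        return 0;
}

In a real streaming loop the block would normally be re-enqueued with
IIO_BUFFER_DMABUF_ENQUEUE_IOCTL instead of being closed after every
cycle; close() is only for tearing the buffer down.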
...
> v2: Only allow the new IOCTLs on the buffer FD created with
> IIO_BUFFER_GET_FD_IOCTL().
Move the changelog below the '---' cutter line, so it gets dropped when
the patch is applied.
...
> static const struct file_operations iio_buffer_chrdev_fileops = {
> .owner = THIS_MODULE,
> .llseek = noop_llseek,
> .read = iio_buffer_read,
> .write = iio_buffer_write,
> + .unlocked_ioctl = iio_buffer_chrdev_ioctl,
> + .compat_ioctl = compat_ptr_ioctl,
Is this member always available, or does it depend on the kernel
configuration (CONFIG_COMPAT)?
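
(Context: the declaration in include/linux/fs.h looks roughly like
this, i.e. it falls back to NULL without CONFIG_COMPAT:

        #ifdef CONFIG_COMPAT
        extern long compat_ptr_ioctl(struct file *file, unsigned int cmd,
                                     unsigned long arg);
        #else
        #define compat_ptr_ioctl NULL
        #endif

so the initializer compiles either way.)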
...
> +#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS 0x00000000
No flags available right now?
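
(Presumably the zero mask is still meant for the usual forward
compatibility check; hypothetical, not from the patch:

        if (dmabuf->flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS)
                return -EINVAL;

i.e. every flag bit is rejected until real flags get defined.)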
...
> + * @bytes_used: number of bytes used in this DMABUF for the data transfer.
> + * If zero, the full buffer is used.
Wouldn't it be error-prone to have 0 defined like this?
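
(Illustration of the concern, with made-up names: a caller that simply
forgets to fill bytes_used silently gets a full-buffer transfer rather
than an error:

        /* hypothetical kernel-side interpretation */
        len = dmabuf->bytes_used ? dmabuf->bytes_used : total_buffer_size;

)
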
--
With Best Regards,
Andy Shevchenko