Message-ID: <20210814073019.GC21175@lst.de>
Date:   Sat, 14 Aug 2021 09:30:19 +0200
From:   Christoph Hellwig <hch@....de>
To:     Paul Cercueil <paul@...pouillou.net>
Cc:     Jonathan Cameron <jic23@...nel.org>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Christian König <christian.koenig@....com>,
        Christoph Hellwig <hch@....de>, linux-iio@...r.kernel.org,
        io-uring@...r.kernel.org, linux-media@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Michael Hennerich <Michael.Hennerich@...log.com>,
        Alexandru Ardelean <ardeleanalex@...il.com>,
        dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: IIO, dmabuf, io_uring

On Fri, Aug 13, 2021 at 01:41:26PM +0200, Paul Cercueil wrote:
> Hi,
>
> A few months ago we (ADI) tried to upstream the interface we use with
> our high-speed ADCs and DACs. It is a system with custom ioctls on the
> iio device node to dequeue and enqueue buffers (allocated with
> dma_alloc_coherent), which can then be mmap'd by userspace
> applications. Anyway, it was ultimately rejected [1]; this API was
> okay in ~2014 when it was designed, but it feels like reinventing the
> wheel in 2021.
>
> Back to the drawing board, then; we'd like to design something that we
> can actually upstream. This high-speed interface looks awfully similar
> to DMABUF, so we may try to implement a DMABUF interface for IIO,
> unless someone has a better idea.

To me this does sound a lot like a dma-buf use case.  The interesting
question to me is how to signal arrival of new data, or readiness to
consume more data.  I suspect the people actually using dma-buf
heavily at the moment (dri/media folks) might be able to chime
in a little more on that.
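
One readily available signal today is that a dma-buf file descriptor
can be poll()ed: POLLIN/POLLOUT block on the fences attached to the
buffer.  A minimal sketch, assuming the exporting driver attaches a
fence that signals when the DMA transfer completes:

    #include <poll.h>

    /* Sketch: block until the fences attached to a dma-buf signal,
     * i.e. until the device is done with the buffer.  Assumes the
     * exporting driver attaches such a fence on DMA completion. */
    static int wait_for_dmabuf(int dmabuf_fd, int timeout_ms)
    {
            struct pollfd pfd = {
                    .fd     = dmabuf_fd,
                    .events = POLLIN,
            };

            /* 1 if the fence signalled, 0 on timeout, -1 on error */
            return poll(&pfd, 1, timeout_ms);
    }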

> Our first use case is that we want userspace applications to be able
> to dequeue buffers of samples (from ADCs), and/or enqueue buffers of
> samples (for DACs), and to be able to manipulate them (mmapped
> buffers). With a DMABUF interface, I guess the userspace application
> would dequeue a dma buffer from the driver, mmap it, read/write the
> data, unmap it, then enqueue it to the IIO driver again so that it can
> be disposed of. Does that sound sane?
>
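
That flow could look roughly like the sketch below.  Note that only
DMA_BUF_IOCTL_SYNC is existing uapi (linux/dma-buf.h); the
IIO_DMABUF_DEQUEUE/ENQUEUE ioctls are made up here to stand in for
whatever the IIO interface would actually provide:

    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/dma-buf.h>

    static int process_one_buffer(int iio_fd, size_t len)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
            };
            int buf_fd;
            void *p;

            buf_fd = ioctl(iio_fd, IIO_DMABUF_DEQUEUE);  /* hypothetical */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                     buf_fd, 0);
            if (p == MAP_FAILED)
                    return -1;

            ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);  /* begin CPU access */
            /* ... read or write the samples through p ... */
            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
            ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);  /* end CPU access */

            munmap(p, len);
            return ioctl(iio_fd, IIO_DMABUF_ENQUEUE, buf_fd);  /* hypothetical */
    }
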
> Our second use case is - and that's where things get tricky - to be
> able to stream the samples to another computer for processing, over
> Ethernet or USB. Our typical setup is a high-speed ADC/DAC on a dev
> board with an FPGA and a weak soft-core or low-power CPU; processing
> the data in situ is not an option. Copying the data from one buffer to
> another is not an option either (way too slow), so we absolutely want
> zero-copy.
>
> Usual userspace zero-copy techniques (vmsplice+splice, MSG_ZEROCOPY,
> etc.) don't really work with mmapped kernel buffers allocated for DMA
> [2] and/or have a huge overhead, so the way I see it, we would also
> need DMABUF support in both the Ethernet stack and the USB (functionfs)
> stack. However, as far as I understand, DMABUF is mostly a DRM/V4L2
> thing, so I am really not sure we have the right idea here.
>
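
For reference, the MSG_ZEROCOPY pattern being ruled out looks roughly
like this: the pages are pinned rather than copied, but the sender then
has to reap completions from the socket error queue, which is part of
the overhead mentioned above:

    #include <sys/socket.h>

    /* Sketch of the MSG_ZEROCOPY pattern: opt in per socket, send
     * with the flag, then drain the completion notification from the
     * error queue before the pages may be reused. */
    static int send_zerocopy(int sock, const void *buf, size_t len)
    {
            int one = 1;
            char control[128];
            struct msghdr msg = {
                    .msg_control    = control,
                    .msg_controllen = sizeof(control),
            };

            if (setsockopt(sock, SOL_SOCKET, SO_ZEROCOPY, &one,
                           sizeof(one)))
                    return -1;
            if (send(sock, buf, len, MSG_ZEROCOPY) < 0)
                    return -1;

            /* Completion bookkeeping the sender cannot skip */
            return recvmsg(sock, &msg, MSG_ERRQUEUE);
    }
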
> And finally, there is the new kid in town, io_uring. I am not very
> familiar with the topic, but it does not seem to be able to handle DMA
> buffers (yet?). The idea that we could dequeue a buffer of samples from
> the IIO device and send it over the network in one single syscall is
> appealing, though.

Think of io_uring really just as an async syscall layer.  It doesn't
replace DMA buffers, but can be used as a different, and for some
workloads more efficient, way to dispatch syscalls.
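
To make that concrete, here is a minimal liburing sketch, just to
illustrate the model (not an IIO integration): the same read(2) you
would issue synchronously is queued, submitted, and reaped through a
ring instead of blocking:

    #include <liburing.h>

    static int async_read(int fd, void *buf, unsigned len)
    {
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            int ret;

            if (io_uring_queue_init(8, &ring, 0) < 0)
                    return -1;

            sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, buf, len, 0);  /* queue a read */
            io_uring_submit(&ring);                    /* one syscall */

            ret = io_uring_wait_cqe(&ring, &cqe);      /* reap completion */
            if (!ret)
                    ret = cqe->res;                    /* bytes read or -errno */
            io_uring_cqe_seen(&ring, cqe);
            io_uring_queue_exit(&ring);
            return ret;
    }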
