Message-ID: <c5dbb765-8c93-4050-84e1-c0f63b43d6c2@xs4all.nl>
Date: Wed, 15 May 2024 11:17:12 +0200
From: Hans Verkuil <hverkuil@...all.nl>
To: Jaroslav Kysela <perex@...ex.cz>, Shengjiu Wang
<shengjiu.wang@...il.com>,
Amadeusz Sławiński
<amadeuszx.slawinski@...ux.intel.com>
Cc: Mauro Carvalho Chehab <mchehab@...nel.org>,
Mark Brown <broonie@...nel.org>, Takashi Iwai <tiwai@...e.de>,
Sebastian Fricke <sebastian.fricke@...labora.com>,
Shengjiu Wang <shengjiu.wang@....com>, sakari.ailus@....fi,
tfiga@...omium.org, m.szyprowski@...sung.com, linux-media@...r.kernel.org,
linux-kernel@...r.kernel.org, Xiubo.Lee@...il.com, festevam@...il.com,
nicoleotsuka@...il.com, lgirdwood@...il.com, tiwai@...e.com,
alsa-devel@...a-project.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
> On 09. 05. 24 13:13, Jaroslav Kysela wrote:
>> On 09. 05. 24 12:44, Shengjiu Wang wrote:
>>>>> mem2mem is just like the decoder in the compress pipeline. which is
>>>>> one of the components in the pipeline.
>>>>
>>>> I was thinking of loopback with endpoints using compress streams,
>>>> without physical endpoint, something like:
>>>>
>>>> compress playback (to feed data from userspace) -> DSP (processing) ->
>>>> compress capture (send data back to userspace)
>>>>
>>>> Unless I'm missing something, you should be able to process data as fast
>>>> as you can feed it and consume it in such case.
>>>>
>>>
>>> Actually, I tried this in the beginning, but it did not work well.
>>> ALSA applies time control to playback and capture, and here the two
>>> sides need to be synchronized. In the ALSA design the playback and
>>> capture pipelines are usually independent, but in this case they are
>>> not: they have to stay in sync.
>>
>> The compress API core has no strict timing constraints. You could
>> eventually have two half-duplex compress devices, if you want a really
>> independent mechanism. If something is missing from the API, you can
>> extend it (for example, to inform user space that this is
>> producer/consumer processing with no relation to real time). I like
>> this idea.
>
> I was thinking more about this. If I am right, the mentioned use in
> gstreamer is supposed to run the conversion (DSP) job in "one shot"
> (which can be handled with a single system call, like a blocking
> ioctl). The goal is just to offload the CPU work to the DSP
> (co-processor). If there are no queuing requirements, we can implement
> this ioctl in the ALSA compress API easily, with data management done
> through the dma-buf API. We could eventually define a new direction
> (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle
> this new data scheme. The API may be extended later as real demand
> arises, of course.
>
> Otherwise all pieces are already in the current ALSA compress API
> (capabilities, params, enumeration). The realtime controls may be created
> using ALSA control API.
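[Editorial sketch, not part of the original mail: the one-shot proposal above could look roughly like the UAPI fragment below. Only the SND_COMPRESS_CONVERT name comes from the mail; the struct layout, field names, and ioctl number are invented here purely for illustration and do not exist in the kernel.]

```c
/* Hypothetical UAPI sketch (assumption, not existing kernel code) of a
 * one-shot conversion ioctl for the ALSA compress API: userspace hands
 * the DSP input and output buffers as dma-buf fds and blocks until the
 * conversion completes.
 */
#include <stdint.h>
#include <sys/ioctl.h>

/* A new direction alongside playback/capture, per the mail. */
enum snd_compr_direction_sketch {
	SND_COMPRESS_PLAYBACK = 0,
	SND_COMPRESS_CAPTURE,
	SND_COMPRESS_CONVERT,	/* mem2mem offload, no realtime relation */
};

struct snd_compr_convert_sketch {
	int32_t  in_fd;		/* dma-buf fd holding the source data */
	int32_t  out_fd;	/* dma-buf fd to receive converted data */
	uint64_t in_size;	/* bytes of valid data in in_fd */
	uint64_t out_size;	/* filled in by the driver on return */
};

/* 'C' is the magic used by the existing compress ioctls; 0x60 is a
 * made-up slot for this sketch. */
#define SNDRV_COMPRESS_CONVERT_SKETCH \
	_IOWR('C', 0x60, struct snd_compr_convert_sketch)
```

With dma-buf carrying the payload, the blocking ioctl only transfers two fds and two sizes, and the realtime knobs (volume, filter parameters) can live in the ALSA control API as the mail notes.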
So does this mean that Shengjiu should attempt to use this ALSA approach first?
If there is a way to do this reasonably cleanly in the ALSA API, then that
obviously is much better from my perspective as a media maintainer.
My understanding was always that it can't be done (or at least not without
a major effort) in ALSA, and in that case V4L2 is a decent plan B, but based
on this I gather that it is possible in ALSA after all.
So can I shelve this patch series for now?
Regards,
Hans