Message-ID: <4170720a-f7bb-484e-9235-a8720cec92c1@linux.intel.com>
Date: Wed, 15 May 2024 09:04:41 -0500
From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
To: Jaroslav Kysela <perex@...ex.cz>, Shengjiu Wang
<shengjiu.wang@...il.com>,
Amadeusz Sławiński
<amadeuszx.slawinski@...ux.intel.com>
Cc: Hans Verkuil <hverkuil@...all.nl>,
Mauro Carvalho Chehab <mchehab@...nel.org>, Mark Brown <broonie@...nel.org>,
Takashi Iwai <tiwai@...e.de>,
Sebastian Fricke <sebastian.fricke@...labora.com>,
Shengjiu Wang <shengjiu.wang@....com>, sakari.ailus@....fi,
tfiga@...omium.org, m.szyprowski@...sung.com, linux-media@...r.kernel.org,
linux-kernel@...r.kernel.org, Xiubo.Lee@...il.com, festevam@...il.com,
nicoleotsuka@...il.com, lgirdwood@...il.com, tiwai@...e.com,
alsa-devel@...a-project.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework
On 5/9/24 06:13, Jaroslav Kysela wrote:
> On 09. 05. 24 12:44, Shengjiu Wang wrote:
>>>> mem2mem is just like the decoder in the compress pipeline, which is
>>>> one of the components in the pipeline.
>>>
>>> I was thinking of a loopback with endpoints using compress streams,
>>> without a physical endpoint, something like:
>>>
>>> compress playback (to feed data from userspace) -> DSP (processing) ->
>>> compress capture (send data back to userspace)
>>>
>>> Unless I'm missing something, you should be able to process data as fast
>>> as you can feed it and consume it in such a case.
>>>
>>
>> Actually, in the beginning I tried this, but it did not work well.
>> ALSA needs time control for playback and capture, and the two need to
>> be synchronized. In the ALSA design the playback and capture pipelines
>> are usually independent, but in this case playback and capture must be
>> synchronized; they are not independent.
>
> The compress API core has no strict timing constraints. You can
> eventually have two half-duplex compress devices, if you would like a
> really independent mechanism. If something is missing in the API, you
> can extend it (e.g. to inform user space that it is producer/consumer
> processing without any relation to real time). I like this idea.
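For reference, the userspace side of that two-half-duplex-devices idea
would look roughly like the sketch below, written against the
tinycompress library. The card/device numbers, the raw-PCM codec setup
and the assumption that the DSP internally routes device 0 (the
"playback" side) to device 1 (the "capture" side) are all made up, and
error handling is minimal:

/*
 * Hypothetical sketch only: feed data into one compress playback device
 * and read the processed result back from a separate compress capture
 * device, as fast as the driver accepts/produces it.
 */
#include <stdio.h>
#include <string.h>
#include <sound/compress_params.h>
#include <tinycompress/tinycompress.h>

int main(void)
{
	struct snd_codec codec;
	struct compr_config config;
	struct compress *tx, *rx;
	char buf[4096];
	int n;

	memset(&codec, 0, sizeof(codec));
	codec.id = SND_AUDIOCODEC_PCM;	/* raw PCM in, raw PCM out */
	codec.ch_in = 2;
	codec.ch_out = 2;
	codec.sample_rate = 48000;	/* encoding of this field is driver-specific */

	memset(&config, 0, sizeof(config));
	config.fragment_size = 4096;
	config.fragments = 4;
	config.codec = &codec;

	tx = compress_open(0, 0, COMPRESS_IN, &config);	/* "playback" side */
	rx = compress_open(0, 1, COMPRESS_OUT, &config);	/* "capture" side */
	if (!tx || !rx || !is_compress_ready(tx) || !is_compress_ready(rx))
		return 1;

	compress_nonblock(rx, 1);	/* don't block when no output is ready yet */
	compress_start(rx);

	/* prime the playback device before triggering it, as cplay does */
	n = fread(buf, 1, sizeof(buf), stdin);
	if (n <= 0)
		return 1;
	compress_write(tx, buf, n);
	compress_start(tx);

	while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0) {
		compress_write(tx, buf, n);
		n = compress_read(rx, buf, sizeof(buf));
		if (n > 0)
			fwrite(buf, 1, n, stdout);
	}

	/*
	 * Open question (see below): once the input ends, how much output
	 * is still pending on the capture side, and when is it safe to stop?
	 */
	compress_stop(tx);
	compress_stop(rx);
	compress_close(tx);
	compress_close(rx);
	return 0;
}
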
The compress API was never intended to be used this way. It was meant to
send compressed data to a DSP for rendering, and keep the host processor
in a low-power state while the DSP local buffer was drained. There was
no intent to loop data back to the host, because that keeps the host in
a high-power state and probably negates the power savings from using a DSP.
The other problem with the loopback is that the compress stuff is
usually a "Front-End" in ASoC/DPCM parlance, and we don't have a good
way to do a loopback between Front-Ends. The entire framework is based
on FEs being connected to BEs.
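
For context, a DPCM machine driver describes that topology with
something like the fragment below. This is an illustrative sketch only;
the link names, DAI names and dummy codec routing are not taken from
any real driver:

/* One FE (what userspace opens) routed to one BE (the physical DAI). */
#include <sound/soc.h>

static struct snd_soc_dai_link_component fe_cpus[] = {
	{ .dai_name = "media-pcm" },		/* hypothetical DSP host DAI */
};
static struct snd_soc_dai_link_component fe_codecs[] = {
	{ .name = "snd-soc-dummy", .dai_name = "snd-soc-dummy-dai" },
};
static struct snd_soc_dai_link_component be_cpus[] = {
	{ .dai_name = "ssp0-port" },		/* hypothetical physical DAI */
};
static struct snd_soc_dai_link_component be_codecs[] = {
	{ .name = "i2c-CODEC:00", .dai_name = "codec-dai" },
};

static struct snd_soc_dai_link card_dai_links[] = {
	{
		/* Front-End: the PCM/compress device exposed to userspace */
		.name = "Media FE",
		.stream_name = "Media Playback",
		.cpus = fe_cpus, .num_cpus = ARRAY_SIZE(fe_cpus),
		.codecs = fe_codecs, .num_codecs = ARRAY_SIZE(fe_codecs),
		.dynamic = 1,			/* marks a DPCM Front-End */
		.dpcm_playback = 1,
	},
	{
		/* Back-End: the physical interface the FE is routed to */
		.name = "SSP0 BE",
		.cpus = be_cpus, .num_cpus = ARRAY_SIZE(be_cpus),
		.codecs = be_codecs, .num_codecs = ARRAY_SIZE(be_codecs),
		.no_pcm = 1,			/* marks a DPCM Back-End */
		.dpcm_playback = 1,
	},
};

There is no equivalent description for connecting one FE to another FE,
which is what a compress-to-compress loopback would need.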
One problem that I can see for ASRC is that it's not clear when the data
will be completely processed on the "capture" stream when you stop the
"playback" stream. There's a non-zero risk of having a truncated output
or waiting for data that will never be generated.
In other words, it might be possible to reuse/extend the compress API
for a 'coprocessor' approach without any rendering to traditional
interfaces, but it's uncharted territory.