Message-ID: <87sez0k661.wl-tiwai@suse.de>
Date: Thu, 02 May 2024 09:46:14 +0200
From: Takashi Iwai <tiwai@...e.de>
To: Mark Brown <broonie@...nel.org>
Cc: Mauro Carvalho Chehab <mchehab@...nel.org>,
Sebastian Fricke <sebastian.fricke@...labora.com>,
Shengjiu Wang <shengjiu.wang@....com>,
hverkuil@...all.nl,
sakari.ailus@....fi,
tfiga@...omium.org,
m.szyprowski@...sung.com,
linux-media@...r.kernel.org,
linux-kernel@...r.kernel.org,
shengjiu.wang@...il.com,
Xiubo.Lee@...il.com,
festevam@...il.com,
nicoleotsuka@...il.com,
lgirdwood@...il.com,
perex@...ex.cz,
tiwai@...e.com,
alsa-devel@...a-project.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework
On Wed, 01 May 2024 03:56:15 +0200,
Mark Brown wrote:
>
> On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
> > Mark Brown <broonie@...nel.org> wrote:
> > > On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
>
> > > The discussion around this originally was that all the audio APIs are
> > > very much centered around real time operations rather than completely
>
> > The media subsystem is also centered around real time. Without real
> > time, you can't have a decent video conference system. Having
> > mem2mem transfers actually helps reduce real-time delays, as it
> > avoids extra latency due to CPU congestion and/or data transfers
> > from/to userspace.
>
> Real time means strongly tied to wall clock times rather than fast - the
> issue was that all the ALSA APIs are based around pushing data through
> the system based on a clock.
>
> > > That doesn't sound like an immediate solution to maintainer overload
> > > issues... if something like this is going to happen the DRM solution
> > > does seem more general but I'm not sure the amount of stop energy is
> > > proportionate.
>
> > I don't think maintainer overload is the issue here. The main
> > point is to avoid a fork in the audio uAPI, plus the burden
> > of re-inventing the wheel with new codes for audio formats,
> > new documentation for them, etc.
>
> I thought that discussion had been had already at one of the earlier
> versions? TBH I've not really been paying attention to this since the
> very early versions where I raised some similar "why is this in media"
> points and I thought everyone had decided that this did actually make
> sense.
Yeah, it was discussed in v1 and v2 threads, e.g.
https://patchwork.kernel.org/project/linux-media/cover/1690265540-25999-1-git-send-email-shengjiu.wang@nxp.com/#25485573
My argument at that time was about how the operation would look, and
the point was that it'd be a "batch-like" operation via M2M without
any timing control. That would be a very special usage for ALSA, and
if anything, it'd have to go through hwdep -- that is, a very
hardware-specific API implementation -- or the compress-offload API,
which looks dubious.
OTOH, the argument was that there is already a framework for M2M in
the media API and that it also fits the batch-like operation. That's
how the thread has evolved until now.
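Just to illustrate what "batch-like" means here, below is a rough
userspace sketch of the M2M flow: you queue a source buffer, queue a
destination buffer, and dequeue the result once the driver is done,
with no clock or period timing involved. The audio buffer types
(V4L2_BUF_TYPE_AUDIO_OUTPUT / _CAPTURE) follow the naming proposed in
this series and may not match the final uAPI; the format setup via
VIDIOC_S_FMT and the mmap of the buffers are omitted for brevity.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int convert_once(int fd)
{
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;
	enum v4l2_buf_type out_type = V4L2_BUF_TYPE_AUDIO_OUTPUT;  /* proposed name */
	enum v4l2_buf_type cap_type = V4L2_BUF_TYPE_AUDIO_CAPTURE; /* proposed name */

	/* one buffer per queue: source PCM in, converted PCM out */
	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.memory = V4L2_MEMORY_MMAP;
	req.type = out_type;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;
	req.type = cap_type;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	/* mmap and fill the OUTPUT buffer with the source samples (not
	 * shown), then queue it together with an empty CAPTURE buffer */
	memset(&buf, 0, sizeof(buf));
	buf.index = 0;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.type = out_type;
	if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
		return -1;
	buf.type = cap_type;
	if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
		return -1;

	/* start both queues; the driver converts at its own pace */
	if (ioctl(fd, VIDIOC_STREAMON, &out_type) < 0 ||
	    ioctl(fd, VIDIOC_STREAMON, &cap_type) < 0)
		return -1;

	/* wait until the converted samples are ready, then pick them up */
	buf.type = cap_type;
	if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	return buf.bytesused;
}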
thanks,
Takashi