Message-ID: <20240503094225.47fe4836@sal.lan>
Date: Fri, 3 May 2024 09:42:25 +0100
From: Mauro Carvalho Chehab <mchehab@...nel.org>
To: Mark Brown <broonie@...nel.org>
Cc: Takashi Iwai <tiwai@...e.de>, Sebastian Fricke
 <sebastian.fricke@...labora.com>, Shengjiu Wang <shengjiu.wang@....com>,
 hverkuil@...all.nl, sakari.ailus@....fi, tfiga@...omium.org,
 m.szyprowski@...sung.com, linux-media@...r.kernel.org,
 linux-kernel@...r.kernel.org, shengjiu.wang@...il.com, Xiubo.Lee@...il.com,
 festevam@...il.com, nicoleotsuka@...il.com, lgirdwood@...il.com,
 perex@...ex.cz, tiwai@...e.com, alsa-devel@...a-project.org,
 linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework

On Fri, 3 May 2024 10:47:19 +0900,
Mark Brown <broonie@...nel.org> wrote:

> On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
> > Mauro Carvalho Chehab <mchehab@...nel.org> wrote:  
> 
> > > There is still time control associated with it, as audio and video
> > > need to be in sync. This is done by controlling the buffer sizes
> > > and can be fine-tuned by checking when the buffer transfer is done.  
> 
> ...
> 
> > Just to complement: on media, we do this per video buffer (or
> > per half video buffer). A typical use case on cameras is to have
> > buffers transferred 30 times per second, if the video is streamed
> > at 30 frames per second.   
> 
> IIRC some big use case for this hardware was transcoding so there was a
> desire to just go at whatever rate the hardware could support as there
> is no interactive user consuming the output as it is generated.

Indeed, codecs could be used just for transcoding, but I would expect
that to be a corner use case. Since the chipsets implementing codecs
are typically the ones used on mobile devices, I would expect the major
use cases to be audio/video playback and audio/video conferencing.

Going further, the codec API may end up supporting not only transcoding
(which is something the CPU can usually handle without too much
processing) but also audio processing that may require more complex
algorithms - even deep learning ones - such as background noise
removal, echo detection/removal, automatic gain control, audio
enhancement and the like.

In other words, the typical use cases will have either the input or
the output tied to physical hardware (a microphone or a speaker).

> > I would assume that, on an audio/video stream, the audio data
> > transfer will be programmed to also happen at a regular interval.  
> 
> With audio the API is very much "wake userspace every Xms".
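
Just to illustrate that model: with alsa-lib, userspace effectively
chooses the wakeup interval by setting the period time, and each
blocking read then returns once per period. The sketch below is only an
example (device name, rate and period are arbitrary, error handling is
omitted); it is not something this patch set defines:

/* alsa-lib capture sketch: the period time selected here determines
 * how often snd_pcm_readi() wakes the application. */
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *hw;
	unsigned int rate = 48000;
	unsigned int period_us = 10000;		/* ask for ~10 ms periods */
	snd_pcm_uframes_t period_frames;
	int16_t buf[2 * 4096];			/* room for one period */

	snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);

	snd_pcm_hw_params_alloca(&hw);
	snd_pcm_hw_params_any(pcm, hw);
	snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_channels(pcm, hw, 2);
	snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
	snd_pcm_hw_params_set_period_time_near(pcm, hw, &period_us, NULL);
	snd_pcm_hw_params(pcm, hw);

	snd_pcm_hw_params_get_period_size(hw, &period_frames, NULL);

	for (;;) {
		/* Blocks until one period was captured, i.e. the
		 * "wake userspace every X ms" behaviour. */
		if (snd_pcm_readi(pcm, buf, period_frames) < 0)
			break;
		/* ... hand the period over for processing ... */
	}

	snd_pcm_close(pcm);
	return 0;
}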
