Message-ID: <bc12f76e-a2ac-2818-f136-b31f6fa49310@xs4all.nl>
Date:   Fri, 25 Aug 2023 16:15:52 +0200
From:   Hans Verkuil <hverkuil@...all.nl>
To:     Takashi Iwai <tiwai@...e.de>,
        Shengjiu Wang <shengjiu.wang@...il.com>
Cc:     Mark Brown <broonie@...nel.org>,
        Shengjiu Wang <shengjiu.wang@....com>, sakari.ailus@....fi,
        tfiga@...omium.org, m.szyprowski@...sung.com, mchehab@...nel.org,
        linux-media@...r.kernel.org, linux-kernel@...r.kernel.org,
        Xiubo.Lee@...il.com, festevam@...il.com, nicoleotsuka@...il.com,
        lgirdwood@...il.com, perex@...ex.cz, tiwai@...e.com,
        alsa-devel@...a-project.org, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [RFC PATCH v2 0/7] Add audio support in v4l2 framework

On 25/08/2023 15:54, Takashi Iwai wrote:
> On Fri, 25 Aug 2023 05:46:43 +0200,
> Shengjiu Wang wrote:
>>
>> On Fri, Aug 25, 2023 at 4:21 AM Mark Brown <broonie@...nel.org> wrote:
>>>
>>> On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
>>>> Shengjiu Wang wrote:
>>>
>>>>> But there are several issues:
>>>>> 1. Sound cards need to be created.  The ASRC module supports multiple
>>>>> instances, so a sound card would have to be created for each instance.
>>>
>>>> Hm, why can't it be multiple PCM instances instead?
>>>
>>> I'm having a hard time following this one too.
>>>
>>>>> 2. The ASRC is a single entity, but with DPCM we need to split the input
>>>>> port and the output port into a playback substream and a capture
>>>>> substream. Synchronizing the playback and capture substreams is a
>>>>> problem: how do we start and stop them at the same time?
>>>
>>>> This could be done by enforcing full duplex and linking both
>>>> PCM streams, I suppose.
>>>
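
A minimal userspace sketch of that linking with alsa-lib (device names are
examples only, and the hw_params/sw_params setup is omitted):

#include <alsa/asoundlib.h>

/* Open a playback and a capture PCM on the same card and link them so
 * that start/stop triggers are applied to both streams atomically. */
static int link_duplex(snd_pcm_t **play, snd_pcm_t **cap)
{
	int err;

	err = snd_pcm_open(play, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
	if (err < 0)
		return err;
	err = snd_pcm_open(cap, "hw:0,0", SND_PCM_STREAM_CAPTURE, 0);
	if (err < 0)
		return err;

	/* hw_params/sw_params configuration would go here */

	/* After this call, snd_pcm_start()/snd_pcm_drop() on one stream
	 * is propagated to the linked stream. */
	return snd_pcm_link(*play, *cap);
}
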
>>>>> So shall we make the decision that we can go to the V4L2 solution?
>>>
>>>> Honestly speaking, I don't mind much whether it's implemented in V4L2
>>>> or not -- at least for the kernel part, we can reorganize / refactor
>>>> things internally.  But, the biggest remaining question to me is
>>>> whether this user-space interface is the most suitable one.  Is it
>>>> well defined, usable and maintained for the audio applications?  Or
>>>> is it meant to be a stop-gap for a specific use case?
>>>
>>> I'm having a really hard time summoning much enthusiasm for using v4l
>>> here; it feels like this is heading down the same bodge route as DPCM,
>>> but directly as ABI so even harder to fix properly.  That said, all the
>>> ALSA APIs are really intended to be used in real time, and this sounds
>>> like a non-real-time application?  I don't fully understand what the
>>> actual use case is here.
>>
>> Thanks for your reply.
>>
>> This ASRC memory-to-memory (memory -> asrc -> memory) case is a
>> non-real-time use case.
>>
>> The user fills an input buffer and passes it to the asrc module; after
>> conversion, the asrc sends the output buffer back to the user. So it is
>> not a traditional ALSA playback and capture case. I don't think it is a
>> good idea to create a sound card for it, because it is not really a
>> sound card.
>>
>> It is a specific use case, and there is no existing reference in the
>> current kernel. The v4l2 memory-to-memory framework is the closest
>> implementation; v4l2 currently supports video, image, radio, tuner and
>> touch devices, so it is not complicated to add support for this specific
>> audio case.
>>
>> Maybe you can go through these patches first.  We had already implemented
>> the "memory -> asrc -> i2s device -> codec" use case in ALSA.  Now the
>> "memory -> asrc -> memory" case needs to reuse the code in the asrc
>> driver, so the first 3 patches refine that code so it can be shared by
>> the "memory -> asrc -> memory" driver.
>>
>> The main change is on the v4l2 side: a /dev/v4l2-audio device will be
>> created, and user applications will only use the ioctls of the v4l2
>> framework.
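
As a rough sketch of what that could look like from userspace, following the
generic mem2mem ioctl sequence (the node name and any audio buffer/format
types are assumptions based on this series, not existing mainline ABI; the
video OUTPUT/CAPTURE types are used here only to show the shape of the flow):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int m2m_start(const char *node)
{
	/* e.g. node = "/dev/v4l2-audio0" (hypothetical name) */
	int fd = open(node, O_RDWR);
	struct v4l2_requestbuffers req;
	int out = V4L2_BUF_TYPE_VIDEO_OUTPUT;	/* source data queue */
	int cap = V4L2_BUF_TYPE_VIDEO_CAPTURE;	/* converted data queue */

	if (fd < 0)
		return -1;

	/* VIDIOC_S_FMT on both queues (rates/sample formats) omitted */

	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.memory = V4L2_MEMORY_MMAP;
	req.type = out;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	req.type = cap;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* mmap() and VIDIOC_QBUF the buffers, then: */
	ioctl(fd, VIDIOC_STREAMON, &out);
	ioctl(fd, VIDIOC_STREAMON, &cap);

	/* main loop: queue filled source buffers on the OUTPUT queue,
	 * dequeue converted buffers from the CAPTURE queue */
	return fd;
}
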
> 
> Ah, now I'm slowly understanding.  So, what you want is to have an
> interface to perform the batch conversion of a data stream from an
> input to an output?  And the ALSA PCM interface doesn't fully fit that
> purpose because the data handling is batched and it's not like normal
> PCM streaming?
> 
> Basically the whole M2M argument is rather subtle.  Those are
> implementation details that can be resolved in several different ways
> on the kernel side.  But the design of the operation is the crucial
> point.
> 
> Maybe we can consider implementing a similar feature in the ALSA API,
> too.  But that's too much of a stretch for now.
> 
> So, if the v4l2 interface provides the requested feature (the batched
> audio stream conversion), it's OK to ride on it.

The V4L2 M2M interface is simple: you open a video device and then you can
pass data to the hardware; it processes the data and you get the processed
data back.

The hardware just processes the data as fast as it can. Each time you open
the video device a new instance is created, and each instance can pass jobs
to the hardware.

Currently it is used for video scalers, deinterlacers, colorspace converters
and codecs, but in the end it is just data in, data out, with some job
scheduling (FIFO) towards the hardware. So supporting audio using the same
core m2m framework wouldn't be a big deal. We'd probably make a /dev/v4l-audio
device for that.
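
On the driver side an instance would hook into the existing v4l2-mem2mem
helpers in much the same way the video m2m drivers do. A rough sketch, where
the asrc_ctx structure and the hardware hooks are hypothetical and only the
m2m helper calls are the existing API:

#include <media/v4l2-fh.h>
#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-v4l2.h>

/* hypothetical per-instance state of an audio m2m driver */
struct asrc_ctx {
	struct v4l2_fh fh;
	struct v4l2_m2m_dev *m2m_dev;
};

/* Called by the m2m core when a job (one source buffer plus one
 * destination buffer) is ready to run. */
static void asrc_device_run(void *priv)
{
	struct asrc_ctx *ctx = priv;
	struct vb2_v4l2_buffer *src, *dst;

	src = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
	dst = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);

	/* kick the hardware to convert src -> dst; once the conversion
	 * completes (e.g. in the irq handler): */
	v4l2_m2m_buf_done(src, VB2_BUF_STATE_DONE);
	v4l2_m2m_buf_done(dst, VB2_BUF_STATE_DONE);
	v4l2_m2m_job_finish(ctx->m2m_dev, ctx->fh.m2m_ctx);
}

static const struct v4l2_m2m_ops asrc_m2m_ops = {
	.device_run = asrc_device_run,
};

/* probe: m2m_dev = v4l2_m2m_init(&asrc_m2m_ops);
 * open:  ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(m2m_dev, ctx, queue_init); */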

It doesn't come for free: it is a new API, so besides adding support for it, it
also needs to be documented, we would need compliance tests, and very likely I
would want a new virtual driver for this (vim2m.c would be a good template).

Regards,

	Hans
