Message-ID: <CAB=otbSAWpo0n4rW34C7qvKW6J7BJK6pdqCVJU_AHi0u3hqe-A@mail.gmail.com>
Date: Tue, 26 Jul 2016 04:31:09 +0300
From: Ruslan Bilovol <ruslan.bilovol@...il.com>
To: Clemens Ladisch <clemens@...isch.de>
Cc: Felipe Balbi <balbi@...nel.org>, Daniel Mack <zonque@...il.com>,
Jassi Brar <jassisinghbrar@...il.com>,
"linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/5] USB Audio Gadget refactoring
On Fri, Jul 15, 2016 at 10:43 AM, Clemens Ladisch <clemens@...isch.de> wrote:
>>> On Tue, May 24, 2016 at 2:50 AM, Ruslan Bilovol
>>> <ruslan.bilovol@...il.com> wrote:
>>>> it may break current usecase for some people
>
> And what are the benefits that justify breaking the kernel API?
The main limitation of the current f_uac1 design is that it can be used only
on systems with a real ALSA card present, and it is locked to exactly the
number of channels and sampling rate that the sink card has.
It is also inflexible - no audio processing can be done between f_uac1 and the card.
And if someone wants to bind f_uac1 to another sound card, they have to
unload g_audio or reconfigure it through configfs - which means USB
re-enumeration on the host.
With a "virtual sound card", audio processing is done in userspace
and is more flexible. You don't even need a real sound card; a
userspace application can play/capture the audio samples.
Moreover, the existing f_uac2 (the USB Audio Class 2.0 function
implementation) already uses the "virtual sound card" approach.
Real cases where a UAC1 gadget represented as a virtual sound card on the
gadget side is required:
- the Android accessory f_audio_source.c implementation: Android plays audio
directly to the UAC1 "virtual sound card"
- some 3G/LTE voice USB sticks running Linux as firmware have a userspace
application inside that receives audio from the network and needs to
source/sink it to a sound card (UAC1)
- a USB sound card with complex audio processing inside, built on a small
Linux-powered device (a "sound studio")
What's annoying is that we have two quite similar USB Audio Class
implementations (f_uac1 and f_uac2) with opposite audio
representations: the first transfers audio samples directly to a real ALSA
card, the second exposes a virtual ALSA card. That means similar things
(capture/playback, etc.) have to be implemented in different ways. With the
new design both implementations provide the same "API" (a virtual sound
card), allowing us to reuse a lot of code and implement new features much
more easily (look at the addition of capture support to f_uac1 in PATCH 5/5 -
it was very simple and consists almost entirely of adding new USB descriptors
and reusing existing code from the newly created u_audio.c).
The new USB Audio Gadget design also follows the existing approach of
other USB classes:
- serial gadgets use u_serial and expose a virtual TTY port
- networking gadgets use u_ether and expose a virtual network interface
- the uvc gadget exposes a virtual v4l device
- the midi gadget exposes a virtual sound card
- etc.
Of course, the disadvantage of the new approach for the UAC1 gadget is that
you need a userspace application to route audio from the virtual to the
real sound card, as in the UAC2 gadget case. But thanks to existing
applications like alsaloop it's not difficult nowadays.
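For example, the routing could be just one alsaloop invocation - a sketch
only, since the actual card names depend on the system (check `aplay -l`
and `arecord -l`; "UAC1Gadget" and "hw:0" below are assumptions):

```shell
# Capture from the gadget's virtual card and play back on a real card.
# -C = capture PCM, -P = playback PCM, -t = desired latency in microseconds.
alsaloop -C hw:UAC1Gadget -P hw:0 -t 50000
```

A second alsaloop instance with -C and -P swapped would handle the opposite
direction once f_uac1 gains capture support.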
The answer I want to get from this RFC: can we drop the current f_uac1
approach and simplify the USB Audio Gadget by reusing common code?
Or does "we do not break userspace" (or the API) apply, so we have to live
with it forever?
Best regards,
Ruslan