Message-ID: <e857f6cf-0e82-aa2b-9291-6d0c40ac918a@ladisch.de>
Date: Mon, 7 Aug 2017 15:55:25 +0200
From: Clemens Ladisch <clemens@...isch.de>
To: Oleksandr Andrushchenko <andr2000@...il.com>
Cc: alsa-devel@...a-project.org, xen-devel@...ts.xen.org,
linux-kernel@...r.kernel.org,
Oleksandr Andrushchenko <oleksandr_andrushchenko@...m.com>,
tiwai@...e.com
Subject: Re: [alsa-devel] [PATCH 08/11] ALSA: vsnd: Add timer for period
interrupt emulation

Oleksandr Andrushchenko wrote:
> On 08/07/2017 04:11 PM, Clemens Ladisch wrote:
>> How does that interface work?
>
> For the buffer received in .copy_user/.copy_kernel, we send a request
> to the backend and get a response back (asynchronously) once it has copied
> the bytes into the HW/mixer/etc., so the buffer on the frontend side can be reused.

So if the frontend sends too many (or too large) requests, does the
backend wait until there is enough free space in its buffer before
it does the actual copying and then sends the ack?

If yes, then these acks can be used as interrupts. (You still
have to count frames, and call snd_pcm_period_elapsed() exactly
when a period boundary is reached or crossed.)

Splitting a large read/write into smaller requests to the backend
would improve the granularity of the known stream position.
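
For example, the .copy_user callback could split the copy along period
boundaries; vsnd_send_req() below is a hypothetical helper standing in
for whatever queues one request to the backend:

static int vsnd_copy_user(struct snd_pcm_substream *substream, int channel,
			  unsigned long pos, void __user *src,
			  unsigned long bytes)
{
	struct snd_pcm_runtime *runtime = substream->runtime;
	unsigned long chunk = frames_to_bytes(runtime, runtime->period_size);

	while (bytes > 0) {
		unsigned long n = min(bytes, chunk);
		/* hypothetical: queue one request for n bytes at offset pos */
		int err = vsnd_send_req(substream, pos, src, n);

		if (err < 0)
			return err;
		pos += n;
		src += n;
		bytes -= n;
	}
	return 0;
}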

The overall latency would be the sum of the sizes of the frontend
and backend buffers.
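(To put a number on it: assuming, say, 48 kHz 16-bit stereo, i.e.
192 bytes per millisecond, and a 64 KiB buffer on each side, the
worst case would be 131072 / 192 ≈ 683 ms.)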

Why is the protocol designed this way? Wasn't the goal to expose
some 'real' sound card?

Regards,
Clemens