Message-ID: <20180731135115.GD5719@sirena.org.uk>
Date: Tue, 31 Jul 2018 14:51:15 +0100
From: Mark Brown <broonie@...nel.org>
To: Takashi Iwai <tiwai@...e.de>
Cc: "Agrawal, Akshu" <Akshu.Agrawal@....com>,
Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>,
"moderated list:SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEM..."
<alsa-devel@...a-project.org>, Alexander.Deucher@....com,
djkurtz@...omium.org, Liam Girdwood <lgirdwood@...il.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [alsa-devel] [PATCH] ASoC: soc-pcm: Use delay set in pointer
function

On Tue, Jul 31, 2018 at 03:29:35PM +0200, Takashi Iwai wrote:
> Mark Brown wrote:

> > > > However since it's not supposed to be providing any DMA a CPU DAI really
> > > > shouldn't be doing this...

> > > Well, if so, the CPU DAI cannot get the exact base delay
> > > corresponding to the reported position either, no?

> > It can know how much delay it's adding internally between its input and
> > output, which feeds into the overall delay experienced by the user.

> But isn't it merely the additional delay that should be applied on top
> of the existing runtime->delay?

Yes. I'm saying that if the CPU DAI thinks it can figure out the base
delay, something is confused.
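
To make the split concrete, here is a minimal sketch of the kind of
thing a CPU DAI can legitimately report via the snd_soc_dai_ops .delay
callback (this isn't from any real driver; struct foo_priv and
foo_read_fifo_level() are invented for illustration):

#include <sound/pcm.h>
#include <sound/soc.h>

struct foo_priv;	/* hypothetical driver-private state */
/* hypothetical helper: frames currently queued in the DAI's FIFO */
snd_pcm_sframes_t foo_read_fifo_level(struct foo_priv *priv);

static snd_pcm_sframes_t foo_dai_delay(struct snd_pcm_substream *substream,
				       struct snd_soc_dai *dai)
{
	struct foo_priv *priv = snd_soc_dai_get_drvdata(dai);

	/*
	 * Report only the latency this DAI adds between its input and
	 * its output.  The base position/delay for the stream belongs
	 * to the DMA side's pointer callback, not to the DAI.
	 */
	return foo_read_fifo_level(priv);
}

static const struct snd_soc_dai_ops foo_dai_ops = {
	.delay	= foo_dai_delay,
};

soc-pcm then adds this on top of whatever base delay the pointer
callback established in runtime->delay; a DAI that tries to work out
the base delay itself just ends up double counting it.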