Message-Id: <161073473698.12268.1646614149546970077.b4-ty@kernel.org>
Date: Fri, 15 Jan 2021 18:18:56 +0000
From: Mark Brown <broonie@...nel.org>
To: alsa-devel@...a-project.org,
Bard Liao <yung-chuan.liao@...ux.intel.com>, vkoul@...nel.org
Cc: srinivas.kandagatla@...aro.org, jank@...ence.com,
hui.wang@...onical.com, rander.wang@...ux.intel.com,
vinod.koul@...aro.org, tiwai@...e.de,
ranjani.sridharan@...ux.intel.com, gregkh@...uxfoundation.org,
pierre-louis.bossart@...ux.intel.com, sanyog.r.kale@...el.com,
bard.liao@...el.com, linux-kernel@...r.kernel.org
Subject: Re: (subset) [PATCH 0/2] ASoC/SoundWire: fix timeout values
On Fri, 15 Jan 2021 14:16:49 +0800, Bard Liao wrote:
> The timeout for an individual transaction w/ the Cadence IP is the same as
> the entire resume operation for codecs.
> This doesn't make sense; we need at least one order of magnitude
> between individual transactions and the entire resume operation.
>
> Set the timeout on the Cadence side to 500ms and 5s for the codec resume.
>
> [...]
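
For context, here is a minimal sketch of the kind of timeout split
described above, using hypothetical constant and function names (the
actual patches touch the Cadence master and the SoundWire codec
drivers; only the 500ms/5s values come from the cover letter):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/*
 * Illustrative values only: a single Cadence bus transaction is given
 * far less time than a full codec resume, so a wedged transaction
 * fails well before the resume itself is declared failed.
 */
#define EXAMPLE_CDNS_TX_TIMEOUT_MS	500	/* one bus transaction */
#define EXAMPLE_CODEC_RESUME_TIMEOUT_MS	5000	/* whole codec resume */

/* Sketch of a codec resume path waiting for initialization to finish. */
static int example_codec_resume(struct completion *init_done)
{
	unsigned long time;

	time = wait_for_completion_timeout(init_done,
			msecs_to_jiffies(EXAMPLE_CODEC_RESUME_TIMEOUT_MS));
	if (!time)
		return -ETIMEDOUT;

	return 0;
}

Keeping the per-transaction wait an order of magnitude below the
resume budget gives several individual transactions a chance to time
out and be reported before the overall resume gives up.
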
Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next

Thanks!

[1/2] ASoC: codecs: soundwire: increase resume timeout
      commit: 7ef8c9edc86cff0881b2eb9a3274796258fbd872
All being well, this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and
send follow-up patches addressing any issues that are reported, if needed.

If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark