Message-Id: <164915109789.276574.4185820197463277703.b4-ty@kernel.org>
Date: Tue, 05 Apr 2022 10:31:37 +0100
From: Mark Brown <broonie@...nel.org>
To: Takashi Iwai <tiwai@...e.com>, christophe.jaillet@...adoo.fr,
Liam Girdwood <lgirdwood@...il.com>,
Jaroslav Kysela <perex@...ex.cz>
Cc: kernel-janitors@...r.kernel.org, alsa-devel@...a-project.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] ASoC: soc-pcm: use GFP_KERNEL when the code is sleepable
On Thu, 31 Mar 2022 22:19:44 +0200, Christophe JAILLET wrote:
> At the kzalloc() call in dpcm_be_connect(), there is no spin lock involved;
> it is only protected by card->pcm_mutex. The spinlock is taken only at the
> later snd_soc_pcm_stream_lock_irq() call, for the list manipulations.
> (Note that it is *_irq(), not *_irqsave(), which means the context is
> sleepable at that point.) So we can safely use GFP_KERNEL there.
>
> This patch reverts commit d8a9c6e1f676 ("ASoC: soc-pcm: use GFP_ATOMIC for
> dpcm structure"), which is no longer needed since commit b7898396f4bb
> ("ASoC: soc-pcm: Fix and cleanup DPCM locking").
>
> [...]
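
For readers following the reasoning above, the locking pattern boils down to
something like the sketch below. This is a simplified illustration only: the
structure and function names (card_ctx, connect_be, etc.) are placeholders,
not the actual soc-pcm code. The point is that the allocation happens while
only a mutex is held, so it may sleep and GFP_KERNEL is sufficient, while the
spinlock (the plain _irq variant, which is only valid in a sleepable,
interrupts-enabled context) is held just for the list insertion.

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

/* Illustrative only -- names are placeholders, not the real soc-pcm types. */
struct card_ctx {
	struct mutex pcm_mutex;		/* held by the caller; sleepable context */
	spinlock_t stream_lock;		/* protects be_list */
	struct list_head be_list;
};

struct be_entry {
	struct list_head node;
};

static int connect_be(struct card_ctx *card)
{
	struct be_entry *e;

	lockdep_assert_held(&card->pcm_mutex);

	/*
	 * No spinlock is held here, only card->pcm_mutex, so the
	 * allocation may sleep: GFP_KERNEL rather than GFP_ATOMIC.
	 */
	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e)
		return -ENOMEM;

	/*
	 * The spinlock is taken only around the list manipulation.
	 * The plain _irq (not _irqsave) variant is fine because the
	 * caller is known to run with interrupts enabled.
	 */
	spin_lock_irq(&card->stream_lock);
	list_add(&e->node, &card->be_list);
	spin_unlock_irq(&card->stream_lock);

	return 0;
}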
Applied to
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next
Thanks!
[1/1] ASoC: soc-pcm: use GFP_KERNEL when the code is sleepable
commit: fb6d679fee95d272c0a94912c4e534146823ee89
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and
send follow-up patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark