Message-Id: <20220513142227.717010807@linuxfoundation.org>
Date: Fri, 13 May 2022 16:23:27 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jaroslav Kysela <perex@perex.cz>, Takashi Iwai <tiwai@suse.de>,
	Ovidiu Panait <ovidiu.panait@windriver.com>
Subject: [PATCH 4.14 11/14] ALSA: pcm: Fix races among concurrent prepare and hw_params/hw_free calls

From: Takashi Iwai <tiwai@suse.de>

commit 3c3201f8c7bb77eb53b08a3ca8d9a4ddc500b4c0 upstream.
Like the previous fixes to the hw_params and hw_free ioctl races, we
need to protect the prepare ioctl against concurrent hw_params and
hw_free calls, too.
This patch implements the locking with the existing
runtime->buffer_mutex for the prepare ioctl. Unlike the previous case
for snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare() is
applied to all linked streams, hence the lock can't simply be taken
at the top level. To track the lock in each linked substream, we
modify snd_pcm_action_group() slightly and take the buffer_mutex in
the stream_lock=false case (where formerly no lock was applied).
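
To see the race this closes, here is a minimal userspace model in
plain C. It is an illustrative sketch only: fake_runtime,
fake_prepare() and fake_hw_free() are hypothetical names, not kernel
APIs, and a pthread mutex stands in for runtime->buffer_mutex.
Without that mutex, prepare could memset() a buffer that hw_free is
releasing at the same moment.

/*
 * Hypothetical userspace model of the prepare vs. hw_free race.
 * fake_runtime, fake_prepare() and fake_hw_free() are illustrative
 * stand-ins, not kernel APIs; buffer_mutex plays the role of
 * runtime->buffer_mutex from the patch below.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct fake_runtime {
	pthread_mutex_t buffer_mutex;	/* mirrors runtime->buffer_mutex */
	char *buffer;			/* stands in for the PCM buffer */
	size_t bytes;
};

static void *fake_hw_free(void *arg)
{
	struct fake_runtime *rt = arg;

	pthread_mutex_lock(&rt->buffer_mutex);
	free(rt->buffer);		/* the buffer goes away here */
	rt->buffer = NULL;
	rt->bytes = 0;
	pthread_mutex_unlock(&rt->buffer_mutex);
	return NULL;
}

static void *fake_prepare(void *arg)
{
	struct fake_runtime *rt = arg;

	pthread_mutex_lock(&rt->buffer_mutex);
	if (rt->buffer)			/* serialized against the free above */
		memset(rt->buffer, 0, rt->bytes);
	pthread_mutex_unlock(&rt->buffer_mutex);
	return NULL;
}

int main(void)
{
	struct fake_runtime rt = {
		.buffer_mutex = PTHREAD_MUTEX_INITIALIZER,
		.buffer = malloc(4096),
		.bytes = 4096,
	};
	pthread_t t1, t2;

	/* Without the mutex these two could interleave and memset()
	 * freed memory; with it, one fully finishes before the other. */
	pthread_create(&t1, NULL, fake_prepare, &rt);
	pthread_create(&t2, NULL, fake_hw_free, &rt);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

The patch below applies exactly this serialization, extended to every
substream in a linked group.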

Cc: <stable@vger.kernel.org>
Reviewed-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20220322170720.3529-4-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
[OP: backport to 4.14: adjusted context]
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)

--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -1046,15 +1046,17 @@ struct action_ops {
*/
static int snd_pcm_action_group(const struct action_ops *ops,
struct snd_pcm_substream *substream,
- int state, int do_lock)
+ int state, int stream_lock)
{
struct snd_pcm_substream *s = NULL;
struct snd_pcm_substream *s1;
int res = 0, depth = 1;

snd_pcm_group_for_each_entry(s, substream) {
- if (do_lock && s != substream) {
- if (s->pcm->nonatomic)
+ if (s != substream) {
+ if (!stream_lock)
+ mutex_lock_nested(&s->runtime->buffer_mutex, depth);
+ else if (s->pcm->nonatomic)
mutex_lock_nested(&s->self_group.mutex, depth);
else
spin_lock_nested(&s->self_group.lock, depth);
@@ -1082,18 +1084,18 @@ static int snd_pcm_action_group(const st
ops->post_action(s, state);
}
_unlock:
- if (do_lock) {
- /* unlock streams */
- snd_pcm_group_for_each_entry(s1, substream) {
- if (s1 != substream) {
- if (s1->pcm->nonatomic)
- mutex_unlock(&s1->self_group.mutex);
- else
- spin_unlock(&s1->self_group.lock);
- }
- if (s1 == s) /* end */
- break;
+ /* unlock streams */
+ snd_pcm_group_for_each_entry(s1, substream) {
+ if (s1 != substream) {
+ if (!stream_lock)
+ mutex_unlock(&s1->runtime->buffer_mutex);
+ else if (s1->pcm->nonatomic)
+ mutex_unlock(&s1->self_group.mutex);
+ else
+ spin_unlock(&s1->self_group.lock);
}
+ if (s1 == s) /* end */
+ break;
}
return res;
}
@@ -1174,10 +1176,12 @@ static int snd_pcm_action_nonatomic(cons
int res;

down_read(&snd_pcm_link_rwsem);
+ mutex_lock(&substream->runtime->buffer_mutex);
if (snd_pcm_stream_linked(substream))
res = snd_pcm_action_group(ops, substream, state, 0);
else
res = snd_pcm_action_single(ops, substream, state);
+ mutex_unlock(&substream->runtime->buffer_mutex);
up_read(&snd_pcm_link_rwsem);
return res;
}
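
A note on the locking pattern above: every linked substream's
buffer_mutex belongs to the same lock class, so taking several of
them in a row would normally trip lockdep's same-class nesting check.
mutex_lock_nested() and spin_lock_nested() exist for exactly this
case: the depth argument selects a lockdep subclass per nesting level
(the surrounding code, outside the hunk context shown, bumps depth
once per locked substream), and with lockdep disabled they compile
down to plain mutex_lock()/spin_lock(). The runtime behaviour is
simply: lock every group member in list order, run the action, then
unlock. Below is a rough userspace analogue, with hypothetical names
and pthread mutexes standing in for the kernel primitives.

/*
 * Rough userspace analogue of the group-wide locking shape; all
 * names are hypothetical, not kernel APIs. Iterating the group in
 * list order gives every caller the same lock acquisition order,
 * so no deadlock is possible among concurrent group actions.
 */
#include <pthread.h>
#include <stdio.h>

#define GROUP_SIZE 3

struct fake_substream {
	pthread_mutex_t buffer_mutex;
	int prepared;
};

static struct fake_substream group[GROUP_SIZE] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

static void group_prepare(void)
{
	int i;

	/* lock phase: one mutex per linked member, fixed order */
	for (i = 0; i < GROUP_SIZE; i++)
		pthread_mutex_lock(&group[i].buffer_mutex);

	/* action phase: no member's buffer can be freed under us now */
	for (i = 0; i < GROUP_SIZE; i++)
		group[i].prepared = 1;

	/* unlock phase, mirroring the _unlock loop in the patch */
	for (i = 0; i < GROUP_SIZE; i++)
		pthread_mutex_unlock(&group[i].buffer_mutex);
}

int main(void)
{
	int i;

	group_prepare();
	for (i = 0; i < GROUP_SIZE; i++)
		printf("substream %d prepared: %d\n", i, group[i].prepared);
	return 0;
}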