Message-ID: <20230511120841.2096524-6-yixuanjiang@google.com>
Date: Thu, 11 May 2023 20:08:40 +0800
From: yixuanjiang <yixuanjiang@...gle.com>
To: tiwai@...e.com, lgirdwood@...il.com, broonie@...nel.org
Cc: linux-kernel@...r.kernel.org, alsa-devel@...a-project.org,
Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>,
Kai Vehmanen <kai.vehmanen@...ux.intel.com>,
Bard Liao <yung-chuan.liao@...ux.intel.com>,
Ranjani Sridharan <ranjani.sridharan@...ux.intel.com>,
Yixuan Jiang <yixuanjiang@...gle.com>, stable@...r.kernel.org
Subject: [PATCH 5/6] ASoC: soc-pcm: test refcount before triggering
From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
[ Upstream commit 848aedfdc6ba25ad5652797db9266007773e44dd ]
On start/pause_release/resume, when more than one FE is connected to
the same BE, it's possible that the trigger is sent more than
once. This is not desirable: we only want to trigger a BE once, which
is straightforward to implement with a refcount.
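As a rough illustration of the refcount idea, consider this minimal
userspace sketch (not the actual DPCM code; be_trigger() and
fe_start() are hypothetical stand-ins for soc_pcm_trigger() and the
per-FE trigger path, and all DPCM state checks are left out):

#include <stdio.h>

/* Hypothetical stand-in for soc_pcm_trigger() on the BE substream. */
static int be_trigger(const char *cmd)
{
	printf("BE trigger: %s\n", cmd);
	return 0;
}

static int be_start;	/* refcount: number of FEs that started the BE */

/* Called once per FE; only the 0 -> 1 transition reaches the hardware. */
static int fe_start(void)
{
	int ret;

	be_start++;
	if (be_start != 1)
		return 0;	/* BE already started by another FE */

	ret = be_trigger("START");
	if (ret)
		be_start--;	/* roll back so a retry can trigger again */
	return ret;
}

int main(void)
{
	fe_start();	/* first FE: BE is actually triggered */
	fe_start();	/* second FE: refcount bump only */
	return 0;
}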
For stop/pause/suspend, the problem is more complicated: the check
implemented in snd_soc_dpcm_can_be_free_stop() may fail due to a
conceptual deadlock when we trigger the BE before the FE. In this
case, the FE states have not yet changed, so there are corner cases
where the TRIGGER_STOP is never sent; this is the dual of the start
case, where multiple triggers might be sent.
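The stop side is the mirror image: each FE drops its reference, and
only the 1 -> 0 transition reaches the hardware, so the BE keeps
running until the last FE stops. Continuing the sketch above
(fe_stop() is again a hypothetical stand-in, and pause handling is
ignored):

/* Called once per FE; only the 1 -> 0 transition reaches the hardware. */
static int fe_stop(void)
{
	int ret;

	be_start--;
	if (be_start != 0)
		return 0;	/* other FEs still use this BE */

	ret = be_trigger("STOP");
	if (ret)
		be_start++;	/* roll back on failure */
	return ret;
}

Note the asymmetry the real patch has to handle: a PAUSED BE has
already dropped its reference, which is why the TRIGGER_STOP hunk
below only decrements the refcount when the BE is still in the START
state.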
This patch moves to an unconditional trigger in all cases: instead of
checking the FE states, a refcount protected by the BE PCM stream
lock decides when the BE actually needs to be triggered.
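The increment-and-test and decrement-and-test sequences are only
correct if they cannot interleave between FEs; in the kernel that
serialization comes from the BE PCM stream lock (see the be_start
comment in the header hunk below). In the userspace sketch, a pthread
mutex plays the same role (an analogy only, not the kernel locking
API):

#include <pthread.h>

static pthread_mutex_t be_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serialize refcount updates across FEs; in the kernel the BE PCM
 * stream lock plays this role. */
static int fe_start_locked(void)
{
	int ret;

	pthread_mutex_lock(&be_lock);
	ret = fe_start();
	pthread_mutex_unlock(&be_lock);
	return ret;
}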
Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
Reviewed-by: Kai Vehmanen <kai.vehmanen@...ux.intel.com>
Reviewed-by: Bard Liao <yung-chuan.liao@...ux.intel.com>
Reviewed-by: Ranjani Sridharan <ranjani.sridharan@...ux.intel.com>
Link: https://lore.kernel.org/r/20211207173745.15850-6-pierre-louis.bossart@linux.intel.com
Signed-off-by: Mark Brown <broonie@...nel.org>
Fixes: aa9ff6a4955f ("ASoC: soc-compress: Reposition and add pcm_mutex")
Signed-off-by: Yixuan Jiang <yixuanjiang@...gle.com>
Cc: stable@...r.kernel.org # 5.15+
---
 include/sound/soc-dpcm.h |  2 ++
 sound/soc/soc-pcm.c      | 53 +++++++++++++++++++++++++++++++---------
 2 files changed, 44 insertions(+), 11 deletions(-)
diff --git a/include/sound/soc-dpcm.h b/include/sound/soc-dpcm.h
index e296a3949b18b..d963f3b608489 100644
--- a/include/sound/soc-dpcm.h
+++ b/include/sound/soc-dpcm.h
@@ -101,6 +101,8 @@ struct snd_soc_dpcm_runtime {
 	enum snd_soc_dpcm_state state;
 
 	int trigger_pending; /* trigger cmd + 1 if pending, 0 if not */
+
+	int be_start; /* refcount protected by BE stream pcm lock */
 };
 
 #define for_each_dpcm_fe(be, stream, _dpcm) \
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 7903516c89a6a..b6099d36518f5 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -1630,7 +1630,7 @@ int dpcm_be_dai_startup(struct snd_soc_pcm_runtime *fe, int stream)
 			be->dpcm[stream].state = SND_SOC_DPCM_STATE_CLOSE;
 			goto unwind;
 		}
-
+		be->dpcm[stream].be_start = 0;
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_OPEN;
 		count++;
 	}
@@ -2116,14 +2116,21 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 
 	switch (cmd) {
 	case SNDRV_PCM_TRIGGER_START:
-		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
+		if (!be->dpcm[stream].be_start &&
+		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
 			goto next;
 
+		be->dpcm[stream].be_start++;
+		if (be->dpcm[stream].be_start != 1)
+			goto next;
+
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			be->dpcm[stream].be_start--;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
 		break;
@@ -2131,9 +2138,15 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
 			goto next;
 
+		be->dpcm[stream].be_start++;
+		if (be->dpcm[stream].be_start != 1)
+			goto next;
+
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			be->dpcm[stream].be_start--;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
 		break;
@@ -2141,9 +2154,15 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
 			goto next;
 
+		be->dpcm[stream].be_start++;
+		if (be->dpcm[stream].be_start != 1)
+			goto next;
+
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			be->dpcm[stream].be_start--;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
 		break;
@@ -2152,12 +2171,18 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
 			goto next;
 
-		if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+		if (be->dpcm[stream].state == SND_SOC_DPCM_STATE_START)
+			be->dpcm[stream].be_start--;
+
+		if (be->dpcm[stream].be_start != 0)
 			goto next;
 
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			if (be->dpcm[stream].state == SND_SOC_DPCM_STATE_START)
+				be->dpcm[stream].be_start++;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
 		break;
@@ -2165,12 +2190,15 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 		if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
 			goto next;
 
-		if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+		be->dpcm[stream].be_start--;
+		if (be->dpcm[stream].be_start != 0)
 			goto next;
 
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			be->dpcm[stream].be_start++;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_SUSPEND;
 		break;
@@ -2178,12 +2206,15 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
 		if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
 			goto next;
 
-		if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+		be->dpcm[stream].be_start--;
+		if (be->dpcm[stream].be_start != 0)
 			goto next;
 
 		ret = soc_pcm_trigger(be_substream, cmd);
-		if (ret)
+		if (ret) {
+			be->dpcm[stream].be_start++;
 			goto next;
+		}
 
 		be->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
 		break;
--
2.40.1.521.gf1e218fcd8-goog