Message-ID: <alpine.DEB.2.22.394.2101051354300.864696@eliteleevi.tm.intel.com>
Date: Tue, 5 Jan 2021 14:25:43 +0200 (EET)
From: Kai Vehmanen <kai.vehmanen@...ux.intel.com>
To: Kai-Heng Feng <kai.heng.feng@...onical.com>
cc: pierre-louis.bossart@...ux.intel.com, lgirdwood@...il.com,
ranjani.sridharan@...ux.intel.com, kai.vehmanen@...ux.intel.com,
daniel.baluta@....com, Mark Brown <broonie@...nel.org>,
Jaroslav Kysela <perex@...ex.cz>,
Takashi Iwai <tiwai@...e.com>,
Keyon Jie <yang.jie@...ux.intel.com>,
Kuninori Morimoto <kuninori.morimoto.gx@...esas.com>,
Marcin Rajwa <marcin.rajwa@...ux.intel.com>,
Payal Kshirsagar <payalskshirsagar1234@...il.com>,
"moderated list:SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS"
<sound-open-firmware@...a-project.org>,
"moderated list:SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEM..."
<alsa-devel@...a-project.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 3/3] ASoC: SOF: Intel: hda: Avoid checking jack on
system suspend
Hey,
On Mon, 4 Jan 2021, Kai-Heng Feng wrote:
> System takes a very long time to suspend after commit 215a22ed31a1
> ("ALSA: hda: Refactor codec PM to use direct-complete optimization"):
> [ 90.065964] PM: suspend entry (s2idle)
The patch itself looks good, but can you explain a bit more under what
conditions you hit the delay?
I tried to reproduce the delay on multiple systems (with tip of
tiwai/master), but with no luck. I can see hda_jackpoll_work() being
called, but at that point runtime PM has already been disabled (via
__device_suspend()), so snd_hdac_is_power_on() returns true even though
the codec is still runtime-suspended (this is expected:
pm_runtime_suspended() returns false whenever runtime PM is disabled,
regardless of the device's actual state). The end result is that the
codec is not powered up in hda_jackpoll_work() and suspend is not
delayed.
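For reference, the two helpers interact roughly like this (quoted from
memory of include/linux/pm_runtime.h and include/sound/hdaudio.h around
v5.10, so double-check against the actual tree):

	/* false whenever runtime PM is disabled, even if the device's
	 * runtime status is RPM_SUSPENDED */
	static inline bool pm_runtime_suspended(struct device *dev)
	{
		return dev->power.runtime_status == RPM_SUSPENDED
			&& !dev->power.disable_depth;
	}

	/* hence reports "power on" once __device_suspend() has disabled
	 * runtime PM, no matter the codec's real power state */
	static inline bool snd_hdac_is_power_on(struct hdac_device *codec)
	{
		return !pm_runtime_suspended(&codec->dev);
	}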
The patch still seems correct: you would hit the problem you describe if
jackpoll_interval were set to a non-zero value (not the case on most
systems supported by SOF, but still a possibility; see the sketch
below). I'm still curious how you hit the problem, though. At minimum,
we are missing a scenario in our testing.
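To make the non-zero case concrete, a rough sketch of the work function
from memory (not the exact upstream code, so take the details with a
grain of salt):

	static void hda_jackpoll_work(struct work_struct *work)
	{
		struct hda_codec *codec =
			container_of(work, struct hda_codec,
				     jackpoll_work.work);

		/* polling the jacks wakes the codec, and with a non-zero
		 * jackpoll_interval the work keeps re-arming itself;
		 * this is what would collide with system suspend */
		snd_hda_jack_set_dirty_all(codec);
		snd_hda_jack_poll_all(codec);

		if (!codec->jackpoll_interval)
			return;

		schedule_delayed_work(&codec->jackpoll_work,
				      codec->jackpoll_interval);
	}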
Br, Kai