Message-ID: <159664899840.1360974.7548807728313161626@swboyd.mtv.corp.google.com>
Date: Wed, 05 Aug 2020 10:36:38 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Bjorn Andersson <bjorn.andersson@...aro.org>,
Douglas Anderson <dianders@...omium.org>
Cc: Alex Elder <elder@...aro.org>, mka@...omium.org,
Douglas Anderson <dianders@...omium.org>,
Andy Gross <agross@...nel.org>, Vinod Koul <vkoul@...nel.org>,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] soc: qcom: aoss: Don't wait for IRQ if we might be in suspend/resume noirq

Quoting Douglas Anderson (2020-08-05 09:16:10)
> Running suspend/resume tests on a sc7180-based board with a modem, I
> found that both system suspend and system resume would hang for 1
> second. These messages indicate where:
>
> genpd genpd:0:4080000.remoteproc: calling genpd_suspend_noirq+0x0/0x2c @ 18659, parent: none
> genpd genpd:0:4080000.remoteproc: genpd_suspend_noirq+0x0/0x2c returned 0 after 987917 usecs
>
> Adding a printout, I found that we were working with the power domain
> where "res->pd.name" was "modem".
>
> I found that we were hanging on the wait_event_interruptible_timeout()
> call in qmp_send(). Specifically we'd wait for the whole 1 second
> timeout to hit, then we'd notice that our condition was true and would
> continue on our merry way. Sure enough, I could confirm that
> wait_event_interruptible_timeout() was returning "1", which indicates
> that the condition evaluated to true but that we also timed out.
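>
> To make that return convention concrete, the wait is roughly this
> (a sketch; the condition name follows the upstream driver, but the
> timeout value and error handling here are illustrative):
>
>     ret = wait_event_interruptible_timeout(qmp->event,
>                                            qmp_message_empty(qmp),
>                                            HZ); /* the 1 second */
>     if (!ret)
>             return -ETIMEDOUT;  /* timed out, condition still false */
>     if (ret < 0)
>             return ret;         /* interrupted by a signal */
>     /*
>      * ret == 1: the condition is true, but we only noticed it once
>      * the full timeout had elapsed -- the 1 second hang seen here.
>      */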
>
> Dumping the stack at the time of the failure made the problem clear.
> Specifically the stack looked like:
> qmp_send+0x1cc/0x210
> qmp_pd_power_toggle+0x90/0xb8
> qmp_pd_power_off+0x20/0x2c
> genpd_sync_power_off+0x80/0x12c
> genpd_finish_suspend+0xd8/0x108
> genpd_suspend_noirq+0x20/0x2c
> dpm_run_callback+0xe0/0x1d4
> __device_suspend_noirq+0xfc/0x200
> dpm_suspend_noirq+0x174/0x3bc
> suspend_devices_and_enter+0x198/0x8a0
> pm_suspend+0x550/0x6f4
>
> As you can see, we're running from the "noirq" callback. Looking at
> what was supposed to wake us up, it was qmp_intr() (our IRQ handler).
> Doh!
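>
> For reference, the wakeup we were waiting on is just the IRQ handler
> kicking the waitqueue, along these lines (the shape of the upstream
> handler, from memory):
>
>     static irqreturn_t qmp_intr(int irq, void *data)
>     {
>             struct qmp *qmp = data;
>
>             wake_up_all(&qmp->event);
>
>             return IRQ_HANDLED;
>     }
>
> In the noirq phase, device interrupts have already been disabled by
> suspend_device_irqs(), so this handler can't run and the only way out
> of the wait is the timeout.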

Why is the genpd being powered off at all? It looks like the driver is
written in a way that doesn't expect this to happen. See where
adsp_pds_disable() is called from. Looks like the remoteproc "stop"
callback should be called or the driver should be detached first.
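
For reference, that teardown is roughly (from memory of the adsp
driver, so treat the details as approximate):

    static void adsp_pds_disable(struct qcom_adsp *adsp,
                                 struct device **pds, size_t pd_count)
    {
            int i;

            for (i = 0; i < pd_count; i++) {
                    dev_pm_genpd_set_performance_state(pds[i], 0);
                    pm_runtime_put(pds[i]);
            }
    }

i.e. the performance vote is only dropped when the remoteproc is
actually being stopped or the driver detached.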

It sort of looks like the genpd is expected to be at the max level all
the time (it sets INT_MAX in adsp_pds_enable(), cool). Maybe we need to
add some sort of suspend hooks to the remoteproc driver instead, where
those hooks run earlier in suspend and drop the genpd performance state
request but otherwise leave the domain enabled across suspend?
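
Something like this untested sketch (hypothetical dev_pm_ops
callbacks; field names guessed to match the adsp driver):

    static int __maybe_unused adsp_pm_suspend(struct device *dev)
    {
            struct qcom_adsp *adsp = dev_get_drvdata(dev);
            int i;

            /* Drop the INT_MAX vote but leave the domains powered */
            for (i = 0; i < adsp->pd_count; i++)
                    dev_pm_genpd_set_performance_state(adsp->pds[i], 0);

            return 0;
    }

    static int __maybe_unused adsp_pm_resume(struct device *dev)
    {
            struct qcom_adsp *adsp = dev_get_drvdata(dev);
            int i;

            for (i = 0; i < adsp->pd_count; i++)
                    dev_pm_genpd_set_performance_state(adsp->pds[i],
                                                       INT_MAX);

            return 0;
    }

These would run in the normal suspend phase, before device IRQs are
disabled, so nothing would have to talk to the AOSS from noirq.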

I know this isn't clearing the land mine that is calling this code from
the noirq phase of suspend, but I'm just looking at the driver and
thinking that it never expected to be called from this phase of suspend
to begin with.
>
> I believe that the correct fix here is to assume that our power_off /
> power_on functions might be called at "noirq" time and just always
> poll if we're called via that path. Other paths can continue to wait
> for the IRQ.
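>
> A sketch of what I mean (illustrative, not the final code; "atomic"
> would be set for callers on the noirq path):
>
>     if (atomic) {
>             unsigned long deadline = jiffies + HZ;
>
>             /* noirq phase: qmp_intr() can't fire, so poll instead */
>             while (!qmp_message_empty(qmp) &&
>                    time_before(jiffies, deadline))
>                     udelay(100);
>
>             ret = qmp_message_empty(qmp) ? 1 : 0;
>     } else {
>             ret = wait_event_interruptible_timeout(qmp->event,
>                                                    qmp_message_empty(qmp),
>                                                    HZ);
>     }
>     if (!ret)
>             return -ETIMEDOUT;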
>
> Fixes: 2209481409b7 ("soc: qcom: Add AOSS QMP driver")
> Signed-off-by: Douglas Anderson <dianders@...omium.org>
> ---
> This problem was observed on the Chrome OS 5.4 tree, which has some
> extra patches in it compared to mainline. The top of the current
> Qualcomm tree didn't have the delay, but that's probably because
> everything isn't fully enabled there yet. I at least confirmed that
> this patch doesn't actually _break_ anything on mainline, though.