Message-Id: <20230912-msm8909-cpufreq-v1-1-767ce66b544b@kernkonzept.com>
Date: Tue, 12 Sep 2023 11:40:15 +0200
From: Stephan Gerhold <stephan.gerhold@...nkonzept.com>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Andy Gross <agross@...nel.org>,
Bjorn Andersson <andersson@...nel.org>,
Konrad Dybcio <konrad.dybcio@...aro.org>,
Ilia Lin <ilia.lin@...nel.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Conor Dooley <conor+dt@...nel.org>, linux-pm@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
devicetree@...r.kernel.org, Ulf Hansson <ulf.hansson@...aro.org>,
Stephan Gerhold <stephan.gerhold@...nkonzept.com>,
stable@...r.kernel.org
Subject: [PATCH 1/4] cpufreq: qcom-nvmem: Enable virtual power domain
devices

The genpd core ignores performance state votes from devices that are
runtime-suspended, as of commit 5937c3ce2122 ("PM: domains: Drop/restore
performance state votes for devices at runtime PM"). However, at the
moment nothing ever enables the virtual devices created in
qcom-cpufreq-nvmem for cpufreq power domain scaling, so they remain
permanently runtime-suspended.

Fix this by enabling the devices after attaching them, and use
dev_pm_syscore_device() to ensure the power domain also stays on when
going to suspend. Since it supplies the CPU, we can never turn it off
from Linux; there are other mechanisms to turn it off when needed,
usually in the RPM firmware or the cpuidle path.
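
For reference, the same pattern applies to any consumer that manages a
performance state power domain by hand: attach the domain, runtime-resume
the returned virtual device so genpd counts its votes, and mark it as a
syscore device if the domain must stay on across system suspend. A
minimal sketch (hypothetical consumer driver, assumed domain name "perf",
not the code in this patch):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>

/* Hypothetical example, not part of qcom-cpufreq-nvmem. */
static int example_attach_perf_domain(struct device *dev)
{
	struct device *virt_dev;
	int ret;

	/* "perf" is an assumed entry in the consumer's power-domain-names. */
	virt_dev = dev_pm_domain_attach_by_name(dev, "perf");
	if (IS_ERR_OR_NULL(virt_dev))
		return virt_dev ? PTR_ERR(virt_dev) : -ENODEV;

	/*
	 * The virtual device starts out runtime-suspended, so genpd would
	 * ignore its performance state votes. Resume it so the votes take
	 * effect.
	 */
	ret = pm_runtime_resume_and_get(virt_dev);
	if (ret) {
		dev_pm_domain_detach(virt_dev, false);
		return ret;
	}

	/* Keep the domain powered across system suspend as well. */
	dev_pm_syscore_device(virt_dev, true);

	return 0;
}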

Without this fix, performance state votes are silently ignored, and the
CPU/CPR voltage is never adjusted. This has been broken since 5.14, but
for some reason no one noticed it on QCS404 so far.

Cc: stable@...r.kernel.org
Fixes: 1cb8339ca225 ("cpufreq: qcom: Add support for qcs404 on nvmem driver")
Signed-off-by: Stephan Gerhold <stephan.gerhold@...nkonzept.com>
---
drivers/cpufreq/qcom-cpufreq-nvmem.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
index 84d7033e5efe..17d6ab14c909 100644
--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
+++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
@@ -25,6 +25,7 @@
 #include <linux/platform_device.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_opp.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/soc/qcom/smem.h>
 
@@ -280,6 +281,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 	}
 
 	for_each_possible_cpu(cpu) {
+		struct device **virt_devs = NULL;
 		struct dev_pm_opp_config config = {
 			.supported_hw = NULL,
 		};
@@ -300,7 +302,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 
 		if (drv->data->genpd_names) {
 			config.genpd_names = drv->data->genpd_names;
-			config.virt_devs = NULL;
+			config.virt_devs = &virt_devs;
 		}
 
 		if (config.supported_hw || config.genpd_names) {
@@ -311,6 +313,23 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 				goto free_opp;
 			}
 		}
+
+		if (virt_devs) {
+			const char * const *name = config.genpd_names;
+			int i;
+
+			for (i = 0; *name; i++, name++) {
+				ret = pm_runtime_resume_and_get(virt_devs[i]);
+				if (ret) {
+					dev_err(cpu_dev, "failed to resume %s: %d\n",
+						*name, ret);
+					goto free_opp;
+				}
+
+				/* Keep CPU power domain always-on */
+				dev_pm_syscore_device(virt_devs[i], true);
+			}
+		}
 	}
 
 	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
--
2.39.2