Message-Id: <1521046715-30683-10-git-send-email-ulf.hansson@linaro.org>
Date: Wed, 14 Mar 2018 17:58:19 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: "Rafael J . Wysocki" <rjw@...ysocki.net>,
Sudeep Holla <sudeep.holla@....com>,
Lorenzo Pieralisi <Lorenzo.Pieralisi@....com>,
linux-pm@...r.kernel.org
Cc: Kevin Hilman <khilman@...nel.org>,
Lina Iyer <ilina@...eaurora.org>,
Lina Iyer <lina.iyer@...aro.org>,
Ulf Hansson <ulf.hansson@...aro.org>,
Rob Herring <robh+dt@...nel.org>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vincent Guittot <vincent.guittot@...aro.org>,
Stephen Boyd <sboyd@...nel.org>,
Juri Lelli <juri.lelli@....com>,
Geert Uytterhoeven <geert+renesas@...der.be>,
linux-arm-kernel@...ts.infradead.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v6 09/25] kernel/cpu_pm: Manage runtime PM in the idle path for CPUs

To allow CPUs to be power managed by PM domains, let's deploy support for
runtime PM for the CPU's corresponding struct device.

More precisely, when the CPU is about to enter an idle state, decrease the
runtime PM usage count for its corresponding struct device by calling
pm_runtime_put_sync_suspend(). Then, when the CPU resumes from idle,
increase the runtime PM usage count again by calling pm_runtime_get_sync().
Cc: Lina Iyer <ilina@...eaurora.org>
Co-developed-by: Lina Iyer <lina.iyer@...aro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
---
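Note: for reference, below is a minimal, self-contained sketch (not part of
the patch) of the bracketing described in the change log. The function names
are hypothetical; the helpers used are the same ones the diff adds to
cpu_pm_enter() and cpu_pm_exit(). It assumes the caller runs with preemption
disabled, as in the idle path.

    #include <linux/cpu.h>
    #include <linux/pm_runtime.h>
    #include <linux/smp.h>

    static void cpu_idle_runtime_pm_enter(void)
    {
            struct device *dev = get_cpu_device(smp_processor_id());

            /* Drop the usage count so the CPU's PM domain may power off. */
            if (dev)
                    pm_runtime_put_sync_suspend(dev);
    }

    static void cpu_idle_runtime_pm_exit(void)
    {
            struct device *dev = get_cpu_device(smp_processor_id());

            /* Raise the usage count again, powering the PM domain back on. */
            if (dev)
                    pm_runtime_get_sync(dev);
    }

Presumably pm_runtime_put_sync_suspend() is used rather than
pm_runtime_put_sync() so the device is suspended synchronously even if
autosuspend is enabled for it.
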
kernel/cpu_pm.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 67b02e1..71317ff 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -16,9 +16,11 @@
*/
#include <linux/kernel.h>
+#include <linux/cpu.h>
#include <linux/cpu_pm.h>
#include <linux/module.h>
#include <linux/notifier.h>
+#include <linux/pm_runtime.h>
#include <linux/spinlock.h>
#include <linux/syscore_ops.h>
@@ -91,6 +93,7 @@ int cpu_pm_enter(void)
{
int nr_calls;
int ret = 0;
+ struct device *dev = get_cpu_device(smp_processor_id());
ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
if (ret)
@@ -100,6 +103,9 @@ int cpu_pm_enter(void)
*/
cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
+ if (!ret && dev)
+ pm_runtime_put_sync_suspend(dev);
+
return ret;
}
EXPORT_SYMBOL_GPL(cpu_pm_enter);
@@ -118,6 +124,11 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
*/
int cpu_pm_exit(void)
{
+ struct device *dev = get_cpu_device(smp_processor_id());
+
+ if (dev)
+ pm_runtime_get_sync(dev);
+
return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
}
EXPORT_SYMBOL_GPL(cpu_pm_exit);
--
2.7.4