Open Source and information security mailing list archives
Date:   Thu, 11 Oct 2018 22:52:51 +0200
From:   "Rafael J. Wysocki" <rjw@...ysocki.net>
To:     "Raju P.L.S.S.S.N" <rplsssn@...eaurora.org>
Cc:     andy.gross@...aro.org, david.brown@...aro.org,
        ulf.hansson@...aro.org, khilman@...nel.org,
        linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
        rnayak@...eaurora.org, bjorn.andersson@...aro.org,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        devicetree@...r.kernel.org, sboyd@...nel.org, evgreen@...omium.org,
        dianders@...omium.org, mka@...omium.org, ilina@...eaurora.org
Subject: Re: [PATCH RFC v1 2/8] kernel/cpu_pm: Manage runtime PM in the idle path for CPUs

On Wednesday, October 10, 2018 11:20:49 PM CEST Raju P.L.S.S.S.N wrote:
> From: Ulf Hansson <ulf.hansson@...aro.org>
> 
> To allow CPUs being power managed by PM domains, let's deploy support for
> runtime PM for the CPU's corresponding struct device.
> 
> More precisely, at the point when the CPU is about to enter an idle state,
> decrease the runtime PM usage count for its corresponding struct device,
> via calling pm_runtime_put_sync_suspend(). Then, at the point when the CPU
> resumes from idle, let's increase the runtime PM usage count, via calling
> pm_runtime_get_sync().
> 
> Cc: Lina Iyer <ilina@...eaurora.org>
> Co-developed-by: Lina Iyer <lina.iyer@...aro.org>
> Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@...eaurora.org>
> (am from https://patchwork.kernel.org/patch/10478153/)
> ---
>  kernel/cpu_pm.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
> index 67b02e1..492d4a8 100644
> --- a/kernel/cpu_pm.c
> +++ b/kernel/cpu_pm.c
> @@ -16,9 +16,11 @@
>   */
>  
>  #include <linux/kernel.h>
> +#include <linux/cpu.h>
>  #include <linux/cpu_pm.h>
>  #include <linux/module.h>
>  #include <linux/notifier.h>
> +#include <linux/pm_runtime.h>
>  #include <linux/spinlock.h>
>  #include <linux/syscore_ops.h>
>  
> @@ -91,6 +93,7 @@ int cpu_pm_enter(void)
>  {
>  	int nr_calls;
>  	int ret = 0;
> +	struct device *dev = get_cpu_device(smp_processor_id());
>  
>  	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
>  	if (ret)
> @@ -100,6 +103,9 @@ int cpu_pm_enter(void)
>  		 */
>  		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
>  
> +	if (!ret && dev && dev->pm_domain)
> +		pm_runtime_put_sync_suspend(dev);

This may cause a power domain to go off, but if it goes off, then the idle
governor has already selected idle states for all of the CPUs in that domain.

Is there any way to ensure that turning the domain off (and later back on) will
not cause the target residency and exit latency expectations for those idle
states to be exceeded?

> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(cpu_pm_enter);
> @@ -118,6 +124,11 @@ int cpu_pm_enter(void)
>   */
>  int cpu_pm_exit(void)
>  {
> +	struct device *dev = get_cpu_device(smp_processor_id());
> +
> +	if (dev && dev->pm_domain)
> +		pm_runtime_get_sync(dev);
> +
>  	return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
>  }
>  EXPORT_SYMBOL_GPL(cpu_pm_exit);
> 

