Date:	Wed, 17 Sep 2014 23:20:09 +0000
From:	"Shilimkar, Santosh" <santosh.shilimkar@...com>
To:	Daniel Lezcano <daniel.lezcano@...aro.org>,
	"Menon, Nishanth" <nm@...com>, Tony Lindgren <tony@...mide.com>,
	"Kristo, Tero" <t-kristo@...com>, "Paul Walmsley" <paul@...an.com>
CC:	Kevin Hilman <khilman@...prootsystems.com>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"J, KEERTHY" <j-keerthy@...com>,
	Benoît Cousson <bcousson@...libre.com>
Subject: RE: [PATCH 08/10] ARM: OMAP5/DRA7: PM: cpuidle MPU CSWR support

Sorry for the format. Emailing from webmail.
________________________________________
From: Daniel Lezcano [daniel.lezcano@...aro.org]
Sent: Wednesday, September 17, 2014 2:49 PM
To: Menon, Nishanth; Shilimkar, Santosh; Tony Lindgren; Kristo, Tero; Paul Walmsley
Cc: Kevin Hilman; linux-arm-kernel@...ts.infradead.org; linux-omap@...r.kernel.org; linux-kernel@...r.kernel.org; J, KEERTHY; Benoît Cousson
Subject: Re: [PATCH 08/10] ARM: OMAP5/DRA7: PM: cpuidle MPU CSWR support

On 08/22/2014 07:02 AM, Nishanth Menon wrote:
> From: Santosh Shilimkar <santosh.shilimkar@...com>
>
> Add OMAP5/DRA74/72 CPUIDLE support.
>
> This patch adds MPUSS low power states in cpuidle.
>
>          C1 - CPU0 WFI + CPU1 WFI + MPU ON
>          C2 - CPU0 RET + CPU1 RET + MPU CSWR
>
> Tested on DRA74/72-EVM for C1 and C2 states.
>
> NOTE: DRA7 does not do voltage scaling as part of retention transition
> and has Mercury which speeds up transition paths - Latency numbers are
> based on measurements done by toggling GPIOs.
>
> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@...com>
> [ j-keerthy@...com rework on 3.14]
> Signed-off-by: Keerthy <j-keerthy@...com>
> [nm@...com: updates based on profiling, OMAP5 squashed]
> Signed-off-by: Nishanth Menon <nm@...com>
> ---
>   arch/arm/mach-omap2/cpuidle44xx.c |   82 ++++++++++++++++++++++++++++++++++++-
>   arch/arm/mach-omap2/pm44xx.c      |    2 +-
>   2 files changed, 82 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c
> index 2498ab0..8ad4f44 100644
> --- a/arch/arm/mach-omap2/cpuidle44xx.c
> +++ b/arch/arm/mach-omap2/cpuidle44xx.c
> @@ -22,6 +22,7 @@
>   #include "common.h"
>   #include "pm.h"
>   #include "prm.h"
> +#include "soc.h"
>   #include "clockdomain.h"
>
>   #define MAX_CPUS    2
> @@ -31,6 +32,7 @@ struct idle_statedata {
>       u32 cpu_state;
>       u32 mpu_logic_state;
>       u32 mpu_state;
> +     u32 mpu_state_vote;
>   };
>
>   static struct idle_statedata omap4_idle_data[] = {
> @@ -51,12 +53,26 @@ static struct idle_statedata omap4_idle_data[] = {
>       },
>   };
>
> +static struct idle_statedata dra7_idle_data[] = {
> +     {
> +             .cpu_state = PWRDM_POWER_ON,
> +             .mpu_state = PWRDM_POWER_ON,
> +             .mpu_logic_state = PWRDM_POWER_ON,
> +     },
> +     {
> +             .cpu_state = PWRDM_POWER_RET,
> +             .mpu_state = PWRDM_POWER_RET,
> +             .mpu_logic_state = PWRDM_POWER_RET,
> +     },
> +};
> +
>   static struct powerdomain *mpu_pd, *cpu_pd[MAX_CPUS];
>   static struct clockdomain *cpu_clkdm[MAX_CPUS];
>
>   static atomic_t abort_barrier;
>   static bool cpu_done[MAX_CPUS];
>   static struct idle_statedata *state_ptr = &omap4_idle_data[0];
> +static DEFINE_RAW_SPINLOCK(mpu_lock);
>
>   /* Private functions */
>
> @@ -78,6 +94,32 @@ static int omap_enter_idle_simple(struct cpuidle_device *dev,
>       return index;
>   }
>
> +static int omap_enter_idle_smp(struct cpuidle_device *dev,
> +                            struct cpuidle_driver *drv,
> +                            int index)
> +{
> +     struct idle_statedata *cx = state_ptr + index;
> +     unsigned long flag;
> +
> +     raw_spin_lock_irqsave(&mpu_lock, flag);

Why do you need this spin_lock_irqsave? Aren't the local irqs already
disabled?

[Santosh] Actually, at one point, before the idle consolidation, the local
irq disable was done inside the idle drivers. Now that it has moved to the
core layer, I think plain spin_lock()/spin_unlock() should work.
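
For illustration, a rough sketch of what the enter path would look like with
the irqsave dropped, assuming local interrupts are already disabled by the
cpuidle core before the driver callback runs (untested, names as in the
patch above):

static int omap_enter_idle_smp(struct cpuidle_device *dev,
			       struct cpuidle_driver *drv,
			       int index)
{
	struct idle_statedata *cx = state_ptr + index;

	/* Local irqs are already off here, so no irqsave/irqrestore. */
	raw_spin_lock(&mpu_lock);
	cx->mpu_state_vote++;
	if (cx->mpu_state_vote == num_online_cpus()) {
		/* Last CPU in: program the cluster (MPU) low power state. */
		pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
		omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
	}
	raw_spin_unlock(&mpu_lock);

	omap4_enter_lowpower(dev->cpu, cx->cpu_state);

	raw_spin_lock(&mpu_lock);
	if (cx->mpu_state_vote == num_online_cpus())
		/* First CPU out: bring the MPU domain back to ON. */
		omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
	cx->mpu_state_vote--;
	raw_spin_unlock(&mpu_lock);

	return index;
}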

> +     cx->mpu_state_vote++;
> +     if (cx->mpu_state_vote == num_online_cpus()) {
> +             pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
> +             omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
> +     }
> +     raw_spin_unlock_irqrestore(&mpu_lock, flag);
> +
> +     omap4_enter_lowpower(dev->cpu, cx->cpu_state);
> +
> +     raw_spin_lock_irqsave(&mpu_lock, flag);
> +     if (cx->mpu_state_vote == num_online_cpus())
> +             omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
> +     cx->mpu_state_vote--;
> +     raw_spin_unlock_irqrestore(&mpu_lock, flag);

I am not sure that will work. What happens if a cpu exits idle and then
re-enters idle immediately?

[Santosh] It works, and that case is already taken care of. The CPU exits idle
and votes itself out of the cluster state; if it re-enters with the right
target state, the cluster state gets picked again.
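
To spell that out with the code above, an illustrative two-CPU trace of
mpu_state_vote for C2 (assuming both CPUs are online, so
num_online_cpus() == 2):

  CPU0 enters C2:     vote 0 -> 1, cluster state not programmed yet
  CPU1 enters C2:     vote 1 -> 2 == num_online_cpus(), MPU programmed for CSWR
  CPU0 wakes up:      vote still 2, MPU forced back to ON, then vote 2 -> 1
  CPU0 re-enters C2:  vote 1 -> 2, MPU programmed for CSWR again
  CPU1 wakes up:      vote still 2, MPU forced back to ON, then vote 2 -> 1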


Could you try a long run of this little program:

https://git.linaro.org/power/pm-qa.git/blob/HEAD:/cpuidle/cpuidle_killer.c

[Santosh] I am sure there will not be any issue with that long-run test case here.
Let's see if Nishanth sees anything otherwise.
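
As a quick local check in the meantime (this is not the pm-qa cpuidle_killer
itself, just a small hypothetical stress loop in the same spirit), something
like the program below keeps every online CPU bouncing in and out of idle
and so exercises the vote/unvote path; build with -pthread and stop it with
Ctrl-C:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* One thread pinned to each online CPU, sleeping in a tight loop so the
 * CPUs keep entering and leaving cpuidle. */
static void *idle_churn(void *arg)
{
	long cpu = (long)arg;
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set))
		perror("pthread_setaffinity_np");

	for (;;)
		usleep(500);	/* short sleeps -> frequent idle entry/exit */

	return NULL;
}

int main(void)
{
	long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t *threads = calloc(nr_cpus, sizeof(*threads));
	long i;

	if (!threads)
		return 1;

	printf("churning idle on %ld CPUs, interrupt to stop\n", nr_cpus);

	for (i = 0; i < nr_cpus; i++)
		pthread_create(&threads[i], NULL, idle_churn, (void *)i);

	for (i = 0; i < nr_cpus; i++)
		pthread_join(threads[i], NULL);

	return 0;
}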

Regards,
Santosh