Message-ID: <5370E026.5040105@linaro.org>
Date: Mon, 12 May 2014 16:52:22 +0200
From: Daniel Lezcano <daniel.lezcano@...aro.org>
To: "Li, Aubrey" <aubrey.li@...ux.intel.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux PM list <linux-pm@...r.kernel.org>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Zhang Rui <rui.zhang@...el.com>,
Aubrey Li <aubrey.li@...el.com>
Subject: Re: [PATCH] PM / suspend: Always use deepest C-state in the "freeze" sleep state
On 05/12/2014 04:19 PM, Li, Aubrey wrote:
> On 2014/5/12 22:08, Daniel Lezcano wrote:
>> On 05/05/2014 12:51 AM, Rafael J. Wysocki wrote:
>>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>>
>>> If freeze_enter() is called, we want to bypass the current cpuidle
>>> governor and always use the deepest available (that is, not disabled)
>>> C-state, because we want to save as much energy as reasonably possible
>>> then and runtime latency constraints don't matter at that point, since
>>> the system is in a sleep state anyway.
>>>
>>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>> ---
>>>
>>> This is on top of https://patchwork.kernel.org/patch/4071541/ .
>>>
>>
>> Wouldn't it make sense to revisit play_dead() instead?
>>
> play_dead() is broken.
>
> Even if it worked, we should still rely on the cpuidle driver to place
> the CPUs into the deepest C-state, because there is no architectural way
> to enter the deepest C-state, and what play_dead() does is based on a bad
> assumption.
Ok, let me rephrase it. Why not revisit cpuidle_play_dead() instead?
--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog