Date:	Mon, 18 Jun 2012 19:00:00 +0530
From:	a0393909 <santosh.shilimkar@...com>
To:	Daniel Lezcano <daniel.lezcano@...aro.org>
CC:	linux-acpi@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
	Lists Linaro-dev <linaro-dev@...ts.linaro.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Kevin Hilman <khilman@...com>,
	Peter De Schrijver <pdeschrijver@...dia.com>,
	Amit Kucheria <amit.kucheria@...aro.org>,
	linux-next@...r.kernel.org, Colin Cross <ccross@...roid.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Rob Lee <rob.lee@...aro.org>
Subject: Re: [linux-pm] cpuidle future and improvements

Daniel,

On 06/18/2012 02:10 PM, Daniel Lezcano wrote:
>
> Dear all,
>
> A few weeks ago, Peter De Schrijver proposed a patch [1] to allow per
> cpu latencies. We had a discussion about this patchset because it
> reverses the modifications Deepthi did some months ago [2], and we may
> want to provide a different implementation.
>
> The Linaro Connect [3] event brought us the opportunity to meet people
> involved in power management and the cpuidle area for different SoCs.
>
> With the Tegra3 and big.LITTLE architectures, per cpu latencies for
> cpuidle are vital.
>
> Also, the SoC vendors would like to have the ability to tune their cpu
> latencies through the device tree.
>
> We agreed on the following steps:
>
> 1. factor out / cleanup the cpuidle code as much as possible
> 2. better sharing of code amongst SoC idle drivers by moving common bits
> to core code
> 3. make the cpuidle_state structure contain only data
> 4. add an API to register latencies per cpu
>
> These four steps impact all the architectures. I began the factor-out /
> cleanup of the code [4], which has been accepted upstream, and I proposed
> some modifications [5], but I got very few answers.
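
For step 4, the kind of interface I would imagine looks roughly like the
below. This is purely illustrative; none of these symbols exist today, and
the names are only made up to show the shape of a per-cpu latency
registration API:

#include <linux/percpu.h>
#include <linux/errno.h>

/*
 * Illustrative sketch only: a per-cpu latency table the core could
 * consult instead of the global values in struct cpuidle_state.
 * The platform (or DT parsing code) would fill and register it.
 */
struct cpuidle_cpu_latency {
	unsigned int exit_latency;	/* worst-case exit latency, in us */
	unsigned int target_residency;	/* minimum residency, in us */
};

/* One table per CPU, with one entry per C-state. */
static DEFINE_PER_CPU(struct cpuidle_cpu_latency *, cpuidle_cpu_latencies);

static int cpuidle_register_cpu_latencies(int cpu,
					  struct cpuidle_cpu_latency *table)
{
	if (!table)
		return -EINVAL;

	per_cpu(cpuidle_cpu_latencies, cpu) = table;
	return 0;
}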
>
Another thing we discussed is bringing the CPU cluster/package notion
into the core idle code. Coupled idle did bring that idea to some
extent, but it can be further extended and abstracted. At the moment,
most of the work is done in the back-end cpuidle drivers, and it could
easily be abstracted if a "cluster idle" notion were supported in the
core layer.
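
To illustrate what I mean (a rough sketch only, nothing like this exists
in the core today; the structure and field names are made up):

#include <linux/cpumask.h>
#include <linux/cpuidle.h>
#include <linux/atomic.h>

/*
 * Rough sketch: if the core knew which CPUs share cluster/package
 * level states, the "last man standing" bookkeeping currently done
 * in each back-end driver could live in one place.
 */
struct cpuidle_cluster {
	struct cpumask cpus;		/* CPUs belonging to this cluster */
	atomic_t idle_cpus;		/* CPUs currently idle in the cluster */
	struct cpuidle_state *states;	/* cluster-level C-states */
	unsigned int state_count;
};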

Per-CPU __and__ per-operating-point (OPP) latency is something which
can also be added to the list. From what I remember of the discussion,
it matters for a few SoCs and can be beneficial.
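
Just to sketch the idea (again made up, not an existing interface): the
exit latency would be looked up per CPU and per OPP, since on those SoCs
the cost of entering and exiting a deep state depends on the operating
point.

#include <linux/percpu.h>

/* Made-up sketch: latency table indexed per CPU and per OPP. */
struct opp_latency {
	unsigned int exit_latency;	/* in us, at this OPP */
};

/*
 * Each CPU points at an array with one entry per OPP, filled by the
 * platform (e.g. from DT).
 */
static DEFINE_PER_CPU(struct opp_latency *, opp_latencies);

static unsigned int cpuidle_exit_latency_us(int cpu, int opp_index)
{
	return per_cpu(opp_latencies, cpu)[opp_index].exit_latency;
}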

Regards
Santosh
