Date:	Fri, 27 Jan 2012 09:32:17 -0800
From:	Colin Cross <ccross@...roid.com>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Kevin Hilman <khilman@...com>, Len Brown <len.brown@...el.com>,
	linux-kernel@...r.kernel.org,
	Amit Kucheria <amit.kucheria@...aro.org>,
	linux-tegra@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
	linux-omap@...r.kernel.org,
	Arjan van de Ven <arjan@...ux.intel.com>,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [linux-pm] [PATCH 0/3] coupled cpuidle state support

On Fri, Jan 27, 2012 at 12:54 AM, Vincent Guittot
<vincent.guittot@...aro.org> wrote:
> On 20 January 2012 21:40, Colin Cross <ccross@...roid.com> wrote:
>> On Fri, Jan 20, 2012 at 12:46 AM, Daniel Lezcano
>> <daniel.lezcano@...aro.org> wrote:
>>> Hi Colin,
>>>
>>> this patchset could be interesting for resolving the cpu
>>> dependencies in a generic way.
>>> What is the status of this patchset?
>>
>> I can't do much with it right now, because I don't have any devices
>> that can do SMP idle with a v3.2 kernel.  I've started working on an
>> updated version that avoids the spinlock, but it might be a while
>> before I can test and post it.  I'm mostly looking for feedback on the
>> approach taken in this patch, and whether it will be useful for other
>> SoCs besides Tegra and OMAP4.
>>
>
> Hi Colin,
>
> In your patch, the cpus that become idle are put into a safe state
> (WFI on most platforms), and they are woken up each time another cpu
> in the cluster becomes idle. Then the cluster state is chosen and the
> cpus enter the selected C-state. On ux500, we use a different way of
> synchronizing the cpus: each cpu is prepared to enter the C-state
> chosen by the governor, and the last cpu to enter idle chooses the
> final cluster state (based on the other cpus' C-states). The main
> advantage of this approach is that you don't need to wake the other
> cpus in order to enter a cluster C-state, which can be quite
> worthwhile when tasks mainly run on one cpu. Did you also consider
> such behavior when developing the coupled cpuidle driver? It could be
> interesting to add it.
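
If I follow, the flow you're describing looks roughly like the sketch
below. This is only my reading of it, not your code; cluster_lock,
cpus_in_idle, cpu_requested_state and the platform_*() /
shallowest_requested_state() helpers are invented names:

#include <linux/spinlock.h>
#include <linux/cpumask.h>

/*
 * Rough sketch of the "last cpu in picks the cluster state" flow.
 * Every name below is invented for illustration, and abort/wakeup
 * handling is omitted.
 */
static DEFINE_SPINLOCK(cluster_lock);
static int cpus_in_idle;			/* cpus currently in idle */
static int cpu_requested_state[NR_CPUS];	/* per-cpu governor choice */

static void last_man_enter_idle(int cpu, int requested_state)
{
	bool last;

	spin_lock(&cluster_lock);
	cpu_requested_state[cpu] = requested_state;
	last = (++cpus_in_idle == num_online_cpus());
	spin_unlock(&cluster_lock);

	if (last)
		/* Last cpu in: pick the shallowest state requested so far */
		platform_enter_cluster_state(shallowest_requested_state());
	else
		/* Not last: enter the per-cpu state; nobody else is woken */
		platform_enter_cpu_state(requested_state);

	spin_lock(&cluster_lock);
	cpus_in_idle--;
	spin_unlock(&cluster_lock);
}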

Waking up the cpus that are in the safe state is not done just to
choose the target state; it's done to allow the cpus to take
themselves to the target low power state.  On ux500, are you saying
you take the cpus directly from the safe state to a lower power state
without ever going back to the active state?  I once implemented Tegra
idle that way, and it required lots of nasty synchronization to
prevent resetting a cpu at the same time it was booting due to an
interrupt, and I was later told that Tegra can't handle that sequence
at all, although I haven't verified that yet.
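
To make the contrast concrete, the coupled flow is more or less the
following (again just a sketch, not the actual patch code;
waiting_count, ready_count and the platform/wake helpers are invented
names, and the interrupt-abort path is left out):

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <asm/processor.h>	/* cpu_relax() */

static atomic_t waiting_count = ATOMIC_INIT(0);
static atomic_t ready_count = ATOMIC_INIT(0);

static void coupled_enter_idle(int cpu, int cluster_state)
{
	if (atomic_inc_return(&waiting_count) < num_online_cpus()) {
		/*
		 * Not the last cpu: park in the safe state (WFI) and
		 * wait to be woken by the last cpu to go idle.
		 */
		platform_enter_safe_state(cpu);
	} else {
		/* Last cpu in: kick the parked cpus out of the safe state */
		wake_parked_cpus();
	}

	/* Rendezvous: every cpu is awake and out of the safe state */
	atomic_inc(&ready_count);
	while (atomic_read(&ready_count) < num_online_cpus())
		cpu_relax();

	/* Each cpu takes *itself* down to the chosen cluster state */
	platform_cpu_enter_state(cpu, cluster_state);

	atomic_dec(&ready_count);
	atomic_dec(&waiting_count);
}

The point is that the cpus parked in the safe state come back out of
it and take themselves down, rather than being switched from the safe
state to the cluster state behind their backs.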

On platforms that can't turn the cpus off in a random order, or that
can't take a cpu directly from the safe state to the target state,
something like these coupled cpuidle patches is required.  On
platforms that can, the low power modes can be implemented without
these patches, although it is very hard to do so without race
conditions.