Message-ID: <CAMbhsRRD2bdkcUZvScb-cF05e=R3h69bNVaTPFX4jBKBBOjuMg@mail.gmail.com>
Date: Tue, 13 Mar 2012 17:47:20 -0700
From: Colin Cross <ccross@...roid.com>
To: Kevin Hilman <khilman@...com>
Cc: Daniel Lezcano <daniel.lezcano@...aro.org>,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-pm@...ts.linux-foundation.org,
Len Brown <len.brown@...el.com>,
Santosh Shilimkar <santosh.shilimkar@...com>,
Amit Kucheria <amit.kucheria@...aro.org>,
Arjan van de Ven <arjan@...ux.intel.com>,
Trinabh Gupta <g.trinabh@...il.com>,
Deepthi Dharwar <deepthi@...ux.vnet.ibm.com>,
linux-omap@...r.kernel.org, linux-tegra@...r.kernel.org
Subject: Re: [PATCH 0/3] coupled cpuidle state support
On Tue, Mar 13, 2012 at 5:28 PM, Colin Cross <ccross@...roid.com> wrote:
> On Tue, Mar 13, 2012 at 4:52 PM, Kevin Hilman <khilman@...com> wrote:
>> Hi Colin,
>>
>> On 12/21/2011 01:09 AM, Colin Cross wrote:
>>
>>> To use coupled cpuidle states, a cpuidle driver must:
>>
>> [...]
>>
>>> Provide a struct cpuidle_state.enter function for each state
>>> that affects multiple cpus. This function is guaranteed to be
>>> called on all cpus at approximately the same time. The driver
>>> should ensure that the cpus all abort together if any cpu tries
>>> to abort once the function is called.
>>
>> I've discovered the last sentence above is crucial, and in order to
>> catch all the corner cases I found it useful to have the struct
>> cpuidle_coupled definition in cpuidle.h so that the driver can check
>> ready_count itself (patch below, on top of the $SUBJECT series.)
>
> ready_count is internal state of the core coupled code, and it will
> change significantly in the next version of the patches. Drivers
> cannot depend on it.
>
>> As you know, on OMAP4, when entering the coupled state, CPU0 has to wait
>> for CPU1 to enter its low power state before entering itself. The first
>> pass at implementing this was to just spin waiting for the powerdomain
>> of CPU1 to hit off. That works... most of the time.
>>
>> If CPU1 wakes up immediately (or before CPU0 starts checking), or more
>> likely, fails to hit the low-power state because of other hardware
>> "conditions", CPU0 will end up stuck in the loop waiting for CPU1.
>>
>> To solve this, in addition to checking the power state of CPU1, I also
>> check if (coupled->ready_count != cpumask_weight(&coupled->alive_coupled_cpus)).
>> If true, it means that CPU1 has already exited/aborted so CPU0 had
>> better abort as well.
>>
>> Checking the ready_count seemed like an easy way to do this, but did you
>> have any other mechanisms in mind for CPUs to communicate that they've
>> exited/aborted?
>
> Why not set a flag from CPU1 when it exits the low power state, and
> have CPU0 spin on the powerdomain register or the flag? You can then
> use the parallel barrier function to ensure both cpus have seen the
> flag and reset it to 0 before returning.
I realized the parallel barrier helper was not included in the patch
set I posted; it will be in the next patch set. Short version: no
caller of cpuidle_coupled_parallel_barrier will return before all
cpus in the coupled set have called it. That lets you resynchronize
the cpus after an abort, ensuring they have all seen the abort flag
before clearing it and returning, which leaves everything in the
correct state for the next idle attempt.
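
One way to implement those semantics is a reusable two-phase counter.
Again just a sketch, ignoring memory barriers and cpu hotplug, with a
hypothetical generic signature:

/*
 * Each cpu counts itself in, spins until all n_cpus coupled cpus
 * have arrived, then counts itself out; the last cpu out resets the
 * counter so the barrier can be reused on the next idle attempt.
 */
static void coupled_parallel_barrier(atomic_t *a, int n_cpus)
{
	atomic_inc(a);
	while (atomic_read(a) < n_cpus)		/* phase 1: gather */
		cpu_relax();

	if (atomic_inc_return(a) == 2 * n_cpus)	/* phase 2: release */
		atomic_set(a, 0);
	else
		while (atomic_read(a) > n_cpus)
			cpu_relax();
}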