Message-ID: <20120410102857.GA22721@tbergstrom-lnx.Nvidia.com>
Date: Tue, 10 Apr 2012 13:28:57 +0300
From: Peter De Schrijver <pdeschrijver@...dia.com>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
CC: "Shilimkar, Santosh" <santosh.shilimkar@...com>,
Kevin Hilman <khilman@...com>, Len Brown <len.brown@...el.com>,
Trinabh Gupta <g.trinabh@...il.com>,
Russell King <linux@....linux.org.uk>,
Stephen Warren <swarren@...dotorg.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Deepthi Dharwar <deepthi@...ux.vnet.ibm.com>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
Colin Cross <ccross@...roid.com>,
Olof Johansson <olof@...om.net>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Arjan van de Ven <arjan@...ux.intel.com>
Subject: Re: [RFC PATCH] cpuidle: allow per cpu latencies
On Fri, Apr 06, 2012 at 05:35:46PM +0200, Daniel Lezcano wrote:
> On 04/06/2012 12:32 PM, Shilimkar, Santosh wrote:
> > Peter,
> >
> > On Thu, Apr 5, 2012 at 7:07 PM, Arjan van de Ven <arjan@...ux.intel.com> wrote:
> >> On 4/5/2012 2:53 AM, Peter De Schrijver wrote:
> >>> This patch doesn't update all cpuidle device registrations. I will do that
> >>
> >> The question is whether you want per-CPU latencies, or whether you want
> >> both types of C-state in one big table and have each of the Tegra CPU
> >> types pick half of them...
> >>
> >>
> > Indeed! That should work.
> > I thought C-states were always per-CPU, and that during cpuidle
> > registration you could register C-states according to the specific
> > CPU type, with different latencies if needed.
> >
> > Am I missing something?
>
> That was the case before the cpuidle_state array was moved from the
> cpuidle_device to the cpuidle_driver structure [1].
>
> That had the benefit of using a single latencies array instead of
> multiple copies of the same array, which had been the case until then.
>
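Just to make that move concrete, a simplified sketch of the two layouts
(abbreviated, not the actual kernel definitions):

    /* Before: every device carried its own copy of the state table. */
    struct cpuidle_device {
            struct cpuidle_state states[CPUIDLE_STATE_MAX]; /* per-CPU copy */
            /* ... */
    };

    /* After [1]: the driver holds a single table shared by all CPUs. */
    struct cpuidle_driver {
            struct cpuidle_state states[CPUIDLE_STATE_MAX]; /* one shared array */
            /* ... */
    };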
> I looked at the Tegra3 white paper and understand this is no longer
> true because of the 4-plus-1 architecture [2].
>
The reason is not so much 4-plus-1 itself: in 4-CPU mode, only CPUs 1-3 can
be power-gated individually. To turn off CPU0, the external regulator for the
entire cluster is turned off. This means the latencies for CPU0 differ from
those of the other CPUs.
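To illustrate the asymmetry, the two kinds of state could look roughly like
this (names and latency numbers are made up, only the difference in
magnitude matters):

    /* CPUs 1-3: the core powergate alone -- relatively cheap to leave. */
    static struct cpuidle_state tegra_state_cpu_off = {
            .name             = "cpu-powergate",
            .exit_latency     = 100,        /* us, illustrative */
            .target_residency = 200,        /* us, illustrative */
    };

    /* CPU0: the external regulator takes the whole cluster down. */
    static struct cpuidle_state tegra_state_cluster_off = {
            .name             = "cluster-off",
            .exit_latency     = 2000,       /* us, much larger */
            .target_residency = 5000,       /* us, illustrative */
    };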
> With the increasing number of SoCs we have a lot of new cpuidle drivers,
> and each time we modify something in the cpuidle core it impacts all of
> the cpuidle drivers.
>
> My feeling is that we are going back and forth when patching the cpuidle
> core, and maybe it is time to define clear semantics before patching
> cpuidle again, no?
>
> What could be nice is to have:
>
> * in case of the same latencies for all CPUs, use a single array
>
> * in case of different latencies, group identical latencies into a
> single array (I assume this is the case for 4-plus-1, right?)
>
> Maybe we can move the cpuidle_state to a per_cpu pointer like
> cpuidle_devices in cpuidle.c and then add:
>
> int register_latencies(struct cpuidle_latencies *l, int cpu);
>
> If we have the same latencies for all the CPUs, then we can register the
> same array for each of them, which is just a pointer.
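Something like this, perhaps (a sketch only; register_latencies() and the
per_cpu pointer are the hypothetical pieces proposed above, not an
existing interface):

    #include <linux/cpuidle.h>
    #include <linux/cpumask.h>
    #include <linux/percpu.h>
    #include <linux/errno.h>

    /* One pointer per CPU; identical CPUs can point at the same array. */
    static DEFINE_PER_CPU(struct cpuidle_state *, cpuidle_latencies);

    int register_latencies(struct cpuidle_state *latencies, int cpu)
    {
            if (!latencies || cpu < 0 || cpu >= nr_cpu_ids)
                    return -EINVAL;
            per_cpu(cpuidle_latencies, cpu) = latencies;
            return 0;
    }

    /* Same latencies everywhere: register one shared array for each CPU. */
    static int register_shared_latencies(struct cpuidle_state *latencies)
    {
            int cpu, ret;

            for_each_possible_cpu(cpu) {
                    ret = register_latencies(latencies, cpu);
                    if (ret)
                            return ret;
            }
            return 0;
    }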
Maybe we then also want to make the 'disabled' flag per-CPU, or provide some
other way for the number of C-states to differ per CPU?
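E.g. something like this (hypothetical fields, just to illustrate the idea):

    /* Hypothetical per-CPU view of the driver's state table. */
    struct cpuidle_cpu_states {
            struct cpuidle_state *states;  /* may be shared between CPUs */
            int state_count;               /* can differ per CPU */
            unsigned long disabled;        /* bitmask, one bit per state */
    };

    static DEFINE_PER_CPU(struct cpuidle_cpu_states, cpu_states_view);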
Cheers,
Peter.
--