Message-ID: <52CADBB9.1010704@linux.intel.com>
Date: Mon, 06 Jan 2014 08:37:13 -0800
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>
CC: Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, mingo@...nel.org, pjt@...gle.com,
Morten.Rasmussen@....com, cmetcalf@...era.com, tony.luck@...el.com,
alex.shi@...aro.org, linaro-kernel@...ts.linaro.org, rjw@...k.pl,
paulmck@...ux.vnet.ibm.com, corbet@....net, tglx@...utronix.de,
len.brown@...el.com, amit.kucheria@...aro.org,
james.hogan@...tec.com, schwidefsky@...ibm.com,
heiko.carstens@...ibm.com, Dietmar.Eggemann@....com
Subject: Re: [RFC] sched: CPU topology try
On 1/6/2014 8:33 AM, Peter Zijlstra wrote:
> On Wed, Jan 01, 2014 at 10:30:33AM +0530, Preeti U Murthy wrote:
>> The design looks good to me. In my opinion, information like P-state and
>> C-state dependencies can be kept separate from the topology levels; it
>> might get too complicated unless the information is tightly coupled to
>> the topology.
>
> I'm not entirely convinced we can keep them separated, the moment we
> have multiple CPUs sharing a P or C state we need somewhere to manage
> the shared state and the domain tree seems like the most natural place
> for this.
>
> Now it might well be both P and C states operate at 'natural' domains
> which we already have so it might be 'easy'.
More than that, though... P and C state sharing is mostly hidden from the OS,
because the OS does not have the ability to control it; e.g. there is hardware
that does "if THIS cpu goes idle, the OTHER cpu's P state changes automatically".
That's not just on x86; the ARM guys (IIRC at least the latest Snapdragon) are going
in that direction as well.
For those systems, the OS really should just make local decisions and let the hardware
cope with the hardware grouping.
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/