Message-ID: <502C4A3F.5040405@linux.intel.com>
Date: Wed, 15 Aug 2012 18:17:51 -0700
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Rik van Riel <riel@...hat.com>
CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Alex Shi <alex.shi@...el.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
vincent.guittot@...aro.org, svaidy@...ux.vnet.ibm.com,
Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>
Subject: Re: [discussion]sched: a rough proposal to enable power saving in
scheduler
On 8/15/2012 6:14 PM, Rik van Riel wrote:
> On 08/15/2012 10:43 AM, Arjan van de Ven wrote:
>
>> The easy cop-out is provide the sysadmin a slider.
>> The slightly less easy one is to (and we're taking this approach
>> in the new P state code we're working on) say "in the default
>> setting, we're going to sacrifice up to 5% performance from peak
>> to give you the best power savings within that performance loss budget"
>> (with a slider that can give you 0%, 2.5%, 5% and 10%)
>
> On a related note, I am looking at the c-state menu governor.
>
> We seem to have issues there, with Linux often going into a much
> deeper C state than warranted, which can lead to a fairly steep
> performance penalty for some workloads.
>
Predicting the future is hard.
If you pick too deep a C state, you get a certain fixed performance hit;
if you pick too shallow a C state, you get a pretty large power hit
(depending on how long you actually stay idle).
You'd also need to know the hardware details; at least on Intel a bunch of things
are done by the firmware, and on some platforms we're not doing the right things
in Linux (or the BIOS).
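
To make that trade-off concrete, here is a minimal sketch of the kind of
decision the idle governor has to make. The state table, the numbers, and
pick_cstate() are made-up illustrations, not the real cpuidle driver tables
or the menu governor's actual heuristics:

/*
 * Illustrative only: pick the deepest C state that fits both the
 * predicted idle time and the latency tolerance.
 */
#include <stdio.h>

struct cstate {
	const char *name;
	unsigned int exit_latency_us;	  /* cost paid on wakeup if we went too deep */
	unsigned int target_residency_us; /* break-even idle time for power */
};

/* Hypothetical per-core states, deepest last. */
static const struct cstate states[] = {
	{ "C1",   2,   2 },
	{ "C3",  50, 150 },
	{ "C6", 100, 400 },
};

/*
 * If the idle prediction is too optimistic we eat the exit latency
 * (the fixed performance hit); if it is too pessimistic we stay
 * shallow and burn power.  That is the asymmetry described above.
 */
static const struct cstate *pick_cstate(unsigned int predicted_idle_us,
					unsigned int latency_req_us)
{
	const struct cstate *best = &states[0];
	size_t i;

	for (i = 0; i < sizeof(states) / sizeof(states[0]); i++) {
		if (states[i].target_residency_us > predicted_idle_us)
			break;
		if (states[i].exit_latency_us > latency_req_us)
			break;
		best = &states[i];
	}
	return best;
}

int main(void)
{
	/* Predicted 80us idle, 200us latency tolerance -> stays in C1. */
	printf("%s\n", pick_cstate(80, 200)->name);
	/* Predicted 1000us idle -> goes all the way down to C6. */
	printf("%s\n", pick_cstate(1000, 200)->name);
	return 0;
}

The whole game is in how good predicted_idle_us is; the governor only ever
sees the prediction, never the actual idle duration in advance.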