Message-ID: <51C8D1E5.60804@linux.intel.com>
Date: Mon, 24 Jun 2013 16:10:29 -0700
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: Catalin Marinas <catalin.marinas@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
David Lang <david@...g.hm>,
"len.brown@...el.com" <len.brown@...el.com>,
"alex.shi@...el.com" <alex.shi@...el.com>,
"corbet@....net" <corbet@....net>,
"peterz@...radead.org" <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"efault@....de" <efault@....de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"pjt@...gle.com" <pjt@...gle.com>, Ingo Molnar <mingo@...nel.org>
Subject: Re: power-efficient scheduling design
On 6/24/2013 2:59 PM, Benjamin Herrenschmidt wrote:
> On Mon, 2013-06-24 at 08:26 -0700, Arjan van de Ven wrote:
>>
>> Bringing the system back up when all cores in the whole system are idle and
>> power gated, memory in self-refresh, etc. typically takes < 250 usec
>> (depends on the exact version of the CPU, etc.). But the moment even one
>> core is running, that core keeps the system out of such a deep state, and
>> waking up a subsequent entity is much faster.
>>
>> Bringing just a core out of power gating is more in the 40 to 50 usec range.
>
> Out of curiosity, what happens to PCIe when you bring a package down
> like this ?
PCIe devices can communicate their latency requirements (LTR) if they need
something more aggressive than this; otherwise 250 usec, afaik, falls within
what doesn't break (devices need to cope with arbitration/etc. delays anyway).
And with PCIe link power management there are delays regardless; once a PCIe
link gets powered back on, the memory controller etc. also comes back online.
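For reference, userspace (or a driver, via the in-kernel PM QoS API) can bound
these C-state exit latencies on Linux through the /dev/cpu_dma_latency
interface: writing a 32-bit microsecond value and holding the fd open asks
cpuidle to avoid idle states whose exit latency exceeds that bound. A minimal
sketch follows; the 50 usec value in the usage note is an illustrative choice
(roughly the per-core wakeup figure above), not something prescribed by this
thread, and the helper name is made up:

```c
/* Sketch: hold a CPU wakeup-latency constraint via the PM QoS
 * /dev/cpu_dma_latency interface.  The constraint stays in effect
 * for as long as the returned fd is held open; close(fd) releases it.
 * Returns the open fd, or -1 if the interface is unavailable. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int hold_cpu_latency_us(int32_t max_exit_latency_us)
{
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0)
		return -1;	/* no permission, or interface absent */

	if (write(fd, &max_exit_latency_us, sizeof(max_exit_latency_us))
	    != (ssize_t)sizeof(max_exit_latency_us)) {
		close(fd);
		return -1;
	}
	return fd;
}
```

e.g. hold_cpu_latency_us(50) would keep cores out of states deeper than
~50 usec exit latency until the fd is closed; with no holders, cpuidle is
free to use the deep package states discussed above.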