Message-id: <alpine.LFD.2.00.0910052054540.309@localhost.localdomain>
Date: Mon, 05 Oct 2009 21:28:18 -0400 (EDT)
From: Len Brown <lenb@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, rjw@...k.pl,
balbir@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
linux-acpi@...r.kernel.org, a.p.zijlstra@...llo.nl,
shaohua.li@...el.com, svaidy@...ux.vnet.ibm.com
Subject: Re: [git pull request] ACPI Processor Aggregator Driver for 2.6.32-rc1
> ... we probably never want to even try
> to solve it in the scheduler, because why the hell should we care and add
> complex logic for something like that?
Today we take cores down to 100% idle, one at a time.
This is useful, but it isn't the best we can do,
because it yields zero "uncore" power savings.
As long as at least one core is active anywhere in the system,
all the uncores in all the packages in the system must remain active.
What a scheduler-based solution could do is this:
instead of taking, say, 1 of 64 cores down for 100%
of the period, it could take all 64 cores down
for 1/64th of the same period. That could get the hardware
into the deeper "package C-states", for a measurable
net power savings.
At the same time, this system-wide throttling may mitigate
some of the fairness/availability issues raised regarding
taking cores 100% off-line.
But doing this optimally will not be trivial.
The hardware must stay in the deep sleep states long enough
to make the energy cost of entering and exiting them worthwhile.
Entering those states flushes the caches, which has a performance
impact on all the cores. Device interrupts would prevent
the cores from sleeping, so they'd need to be delayed somehow
if we are to sleep long enough to make sleeping worth it.
cheers,
-Len Brown, Intel Open Source Technology Center
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/