Message-id: <4864E2F4.9080908@sun.com>
Date: Fri, 27 Jun 2008 08:54:12 -0400
From: David Collier-Brown <davecb@....com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>,
Suresh B Siddha <suresh.b.siddha@...el.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Dipankar Sarma <dipankar@...ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Vatsa <vatsa@...ux.vnet.ibm.com>,
Gautham R Shenoy <ego@...ibm.com>
Subject: Re: [RFC v1] Tunable sched_mc_power_savings=n
KOSAKI Motohiro wrote:
> Hi
>
>
>>Advantages:
>>
>>* Enterprise workloads on large hardware configurations may need an
>> aggressive consolidation strategy
>>* The performance impact on servers is different from desktops or laptops.
>> Interactivity is less of a concern on large enterprise servers, while
>> workload response times and performance per watt are more significant
>>* Aggressive power savings even with a marginal performance penalty
>> is a useful tunable for servers since it may provide good
>> performance-per-watt at low utilisation
>>* This tunable can influence other parts of the scheduler, like wakeup
>> biasing for overall task consolidation
>
>
> I'd like to know how much power this actually saves.
> If the saving is only small, I don't think this is an interesting feature.
>
> What percentage saving do you expect?
>
An experiment using DVFS on a Xeon yielded an allowable 15-watt reduction
(roughly 9% of the machine's 160 watts) even while running a substantial
TPC-W workload. Lighter loads allowed a 40-watt reduction, about 25%.
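
For context, here is a minimal userspace sketch of selecting the more
aggressive policy being discussed, assuming the tunable stays at
/sys/devices/system/cpu/sched_mc_power_savings and that writing "2"
selects aggressive consolidation (both are my assumptions, not taken
from the patch itself):

#include <stdio.h>
#include <stdlib.h>

/* Assumed sysfs location of the tunable discussed in this thread. */
#define SCHED_MC_PATH "/sys/devices/system/cpu/sched_mc_power_savings"

int main(void)
{
	FILE *f = fopen(SCHED_MC_PATH, "w");

	if (!f) {
		perror("fopen " SCHED_MC_PATH);
		return EXIT_FAILURE;
	}
	/*
	 * Assumed meanings: 0 = no power-savings balancing,
	 * 1 = current behaviour, 2 = aggressive consolidation.
	 */
	if (fputs("2\n", f) == EOF) {
		perror("write " SCHED_MC_PATH);
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);
	printf("sched_mc_power_savings set to 2\n");
	return EXIT_SUCCESS;
}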
--dave
--
David Collier-Brown | Always do right. This will gratify
Sun Microsystems, Toronto | some people and astonish the rest
davecb@....com | -- Mark Twain
(905) 943-1983, cell: (647) 833-9377, (800) 555-9786 x56583
bridge: (877) 385-4099 code: 506 9191#