Message-ID: <5162454B.2050502@intel.com>
Date: Mon, 08 Apr 2013 12:19:23 +0800
From: Alex Shi <alex.shi@...el.com>
To: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
CC: mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, efault@....de,
morten.rasmussen@....com, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, len.brown@...el.com,
rafael.j.wysocki@...el.com, jkosina@...e.cz,
clark.williams@...il.com, tony.luck@...el.com,
keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v7 20/21] sched: don't do power balance on share cpu power
domain
On 04/08/2013 11:25 AM, Preeti U Murthy wrote:
> Hi Alex,
>
> I am sorry I overlooked the changes you have made to the power
> scheduling policies. Now you have just two: performance and powersave.
>
> Hence you can ignore my comments below. But if you use group->capacity
> instead of group->weight for the threshold, as you did for the balance
> policy in version 5 of this patchset, don't you think the patch below
> can be avoided? Using group->capacity as the threshold will automatically
> ensure that you don't pack onto domains that share cpu power.
This patch is different from the balance policy: with powersave, the
scheduler still tries to move two busy tasks onto one cpu core on Intel
cpus. It just stops packing further inside the core. For example, if
there are two half-busy tasks on one cpu core, then with this patch each
SMT thread gets one half-busy task; without it, both half-busy tasks are
packed onto a single thread.
The removed balance policy packed only one busy task per cpu core. Yes,
the 'balance' policy has its own meaning, but that is a different thing.
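
To make that concrete, here is a small stand-alone C sketch of the idea
(a toy model only: the struct, flag value and helper names are made up
for illustration, and the real patch presumably keys off the kernel's
SD_SHARE_CPUPOWER domain flag). Packing is still allowed across cores,
but is skipped inside an SMT group, where capacity is smaller than
weight anyway:

/*
 * Toy model, not the actual patch: TOY_SHARE_CPUPOWER stands in for the
 * kernel's SD_SHARE_CPUPOWER, and struct toy_group is invented here.
 * In an SMT group the logical cpu count (weight) is larger than the
 * "full power" cpu count (capacity), so the flag test and a
 * capacity-based threshold agree: don't pack further inside the core.
 */
#include <stdio.h>

#define TOY_SHARE_CPUPOWER 0x1	/* stands in for SD_SHARE_CPUPOWER */

struct toy_group {
	const char *name;
	unsigned int flags;	/* domain flags */
	unsigned int weight;	/* logical cpus in the group */
	unsigned int capacity;	/* cpus worth of compute power */
};

/* Skip power packing where cpus share execution resources. */
static int can_pack(const struct toy_group *g, unsigned int nr_running)
{
	if (g->flags & TOY_SHARE_CPUPOWER)
		return 0;			/* SMT siblings: spread, don't pack */
	return nr_running < g->capacity;	/* room left in the group? */
}

int main(void)
{
	struct toy_group smt  = { "SMT siblings", TOY_SHARE_CPUPOWER, 2, 1 };
	struct toy_group core = { "MC cores",     0,                  2, 2 };

	/* One half-busy task already running, a second one arrives. */
	printf("%s: pack second task? %s\n", smt.name,
	       can_pack(&smt, 1) ? "yes" : "no");
	printf("%s: pack second task? %s\n", core.name,
	       can_pack(&core, 1) ? "yes" : "no");
	return 0;
}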
--
Thanks Alex