Message-ID: <51AD46CA.9020609@intel.com>
Date: Tue, 04 Jun 2013 09:45:46 +0800
From: Alex Shi <alex.shi@...el.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Paul Turner <pjt@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Namhyung Kim <namhyung@...nel.org>,
Mike Galbraith <efault@....de>,
Morten Rasmussen <morten.rasmussen@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
Michael Wang <wangyun@...ux.vnet.ibm.com>
Subject: Re: [PATCH v5 5/7] sched: compute runnable load avg in cpu_load and
cpu_avg_load_per_task
On 05/07/2013 02:17 PM, Alex Shi wrote:
> On 05/06/2013 07:10 PM, Peter Zijlstra wrote:
>>>> The runnable_avgs themselves actually have a fair bit of history in
>>>> them already (50% is last 32ms); but given that they don't need to be
>>>> cut-off to respond to load being migrated I'm guessing we could
>>>> actually potentially get by with just "instantaneous" and "use averages"
>>>> where appropriate?
>> Sure, worth a try. If things fall over we can always look at it again.
>>
>>>> We always end up having to re-pick/tune them based on a variety of
>>>> workloads; if we can eliminate them I think it would be a win.
>> Agreed, esp. the plethora of weird idx things we currently have. If we need to
>> re-introduce something it would likely only be the busy case and for that we
>> can immediately link to the balance interval or so.
>>
>>
>>
>
> I'd like to give it a try based on this patchset. :)
>
> First, we can remove the idx and check whether the removal is fine for our
> benchmarks: kbuild, dbench, tbench, hackbench, aim7, specjbb, etc.
>
> If there are regressions, we can think about it more.
>
Peter,
I just tried removing the various rq.cpu_load[] values with the patch below.
Since forkexec_idx and busy_idx are all zero, after the patch the system keeps only
cpu_load[0] and drops the other values.
I tried the patch on top of 3.10-rc3 and the latest tip/sched/core with the benchmarks
dbench, tbench, aim7, hackbench, and the oltp test of sysbench. Performance does not
change noticeably.
So for my test machines (Core2, NHM, SNB, with 2 or 4 CPU sockets) and the benchmarks
above, we are fine to remove the extra cpu_load values.
I don't know whether there are other concerns in other scenarios.
---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 590d535..f0ca983 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4626,7 +4626,7 @@ static inline void update_sd_lb_stats(struct lb_env *env,
 	if (child && child->flags & SD_PREFER_SIBLING)
 		prefer_sibling = 1;

-	load_idx = get_sd_load_idx(env->sd, env->idle);
+	load_idx = 0; /* get_sd_load_idx(env->sd, env->idle); */

 	do {
 		int local_group;
--
Thanks
Alex