Message-Id: <1302833497-19506-1-git-send-email-alex.shi@intel.com>
Date: Fri, 15 Apr 2011 10:11:37 +0800
From: Alex Shi <alex.shi@...el.com>
To: a.p.zijlstra@...llo.nl, linux-kernel@...r.kernel.org
Cc: tim.c.chen@...el.com, suresh.b.siddha@...el.com
Subject: [PATCH] sched: update sds.avg_load in time
Commit 866ab43efd325fae causes the hackbench process-mode benchmark to
drop about 15% in performance on our x86_64 machines. The patch works
as originally intended, but it nearly doubles the number of context
switches during a hackbench run. The root cause is that sds.avg_load is
not updated in time: the group_imb check added by that commit can jump
to the imbalance calculation before the average has been computed.
Moving the sds.avg_load update before the group_imb check recovers the
performance completely.
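
To illustrate the ordering problem outside the kernel, here is a
minimal user-space C sketch (not part of the patch): the field names
mirror sd_lb_stats, and the imbalance arithmetic is deliberately
simplified. When the force-balance path runs before avg_load is derived
from total_load and total_pwr, it sees avg_load == 0 and computes a
much larger imbalance than the updated value would give.

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

struct sd_lb_stats {
	unsigned long total_load;	/* sum of load over the domain */
	unsigned long total_pwr;	/* sum of cpu_power over the domain */
	unsigned long avg_load;		/* domain average, derived below */
	unsigned long max_load;		/* load of the busiest group */
	int group_imb;			/* busiest group internally imbalanced */
};

/* Simplified stand-in for calculate_imbalance(): reads avg_load. */
static unsigned long imbalance(const struct sd_lb_stats *sds)
{
	return sds->max_load > sds->avg_load ?
		sds->max_load - sds->avg_load : 0;
}

int main(void)
{
	struct sd_lb_stats sds = {
		.total_load = 4096, .total_pwr = 2048,
		.max_load = 3000, .group_imb = 1,
	};

	/* Pre-patch ordering: group_imb path runs with avg_load still 0. */
	if (sds.group_imb)
		printf("stale avg_load:   imbalance = %lu\n", imbalance(&sds));

	/* Post-patch ordering: update avg_load first, then decide. */
	sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
	printf("updated avg_load: imbalance = %lu\n", imbalance(&sds));
	return 0;
}

An inflated imbalance like the stale case drives needless task
migrations, which matches the extra context switches observed in
hackbench.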
Signed-off-by: Alex Shi <alex.shi@...el.com>
---
kernel/sched_fair.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 7f00772..036b660 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3127,6 +3127,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
if (!sds.busiest || sds.busiest_nr_running == 0)
goto out_balanced;
+ sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
+
/*
* If the busiest group is imbalanced the below checks don't
* work because they assumes all things are equal, which typically
@@ -3151,7 +3153,6 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
* Don't pull any tasks if this group is already above the domain
* average load.
*/
- sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
if (sds.this_load >= sds.avg_load)
goto out_balanced;
--
1.7.0