Message-ID: <tip-2665621506e178a1f62e59200403c359c463ea5e@git.kernel.org>
Date: Mon, 5 Sep 2016 04:55:17 -0700
From: tip-bot for Dietmar Eggemann <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: dietmar.eggemann@....com, peterz@...radead.org,
torvalds@...ux-foundation.org, morten.rasmussen@....com,
linux-kernel@...r.kernel.org, vincent.guittot@...aro.org,
tglx@...utronix.de, hpa@...or.com, mingo@...nel.org,
yuyang.du@...el.com
Subject: [tip:sched/core] sched/fair: Fix load_above_capacity fixed point
arithmetic width
Commit-ID: 2665621506e178a1f62e59200403c359c463ea5e
Gitweb: http://git.kernel.org/tip/2665621506e178a1f62e59200403c359c463ea5e
Author: Dietmar Eggemann <dietmar.eggemann@....com>
AuthorDate: Wed, 10 Aug 2016 11:27:27 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 5 Sep 2016 13:29:44 +0200
sched/fair: Fix load_above_capacity fixed point arithmetic width
Since commit:
2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")
we now have two different fixed point units for load.
load_above_capacity has to use the same 10-bit fixed point unit as PELT,
whereas NICE_0_LOAD uses a 20-bit fixed point unit on 64-bit kernels.
Fix this by scaling down NICE_0_LOAD when multiplying
load_above_capacity by it.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Vincent Guittot <vincent.guittot@...aro.org>
Acked-by: Morten Rasmussen <morten.rasmussen@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Yuyang Du <yuyang.du@...el.com>
Link: http://lkml.kernel.org/r/1470824847-5316-1-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a18aae..6011bfe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7193,7 +7193,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
if (load_above_capacity > busiest->group_capacity) {
load_above_capacity -= busiest->group_capacity;
- load_above_capacity *= NICE_0_LOAD;
+ load_above_capacity *= scale_load_down(NICE_0_LOAD);
load_above_capacity /= busiest->group_capacity;
} else
load_above_capacity = ~0UL;
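For illustration, here is a minimal user-space sketch of the unit mismatch the
patch fixes. It is not kernel code: the shift and constant values mirror the
64-bit definitions in kernel/sched/sched.h around v4.8, and the example
numbers (3 runnable tasks on a group with capacity for 2 CPUs) are made up.

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

/* 64-bit kernels keep load weights with increased (20-bit) resolution */
#define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
#define NICE_0_LOAD		(1UL << NICE_0_LOAD_SHIFT)
#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)

int main(void)
{
	unsigned long sum_nr_running = 3;			/* hypothetical */
	unsigned long group_capacity = 2 * SCHED_CAPACITY_SCALE; /* 2 CPUs */
	unsigned long load_above_capacity;

	load_above_capacity = sum_nr_running * SCHED_CAPACITY_SCALE;
	load_above_capacity -= group_capacity;	/* 1024, in capacity units */

	/* Buggy: result lands in the 20-bit load unit (524288 == 512 << 10) */
	unsigned long buggy = load_above_capacity * NICE_0_LOAD
			      / group_capacity;

	/* Fixed: result stays in the 10-bit unit PELT uses (512 here) */
	unsigned long fixed = load_above_capacity * scale_load_down(NICE_0_LOAD)
			      / group_capacity;

	printf("buggy: %lu, fixed: %lu\n", buggy, fixed);
	return 0;
}

With the scaled-down factor, load_above_capacity comes out in the same 10-bit
fixed point unit as the PELT-based load terms it is later compared and
combined with in calculate_imbalance().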