Message-Id: <1500038464-8742-6-git-send-email-josef@toxicpanda.com>
Date: Fri, 14 Jul 2017 13:21:02 +0000
From: Josef Bacik <josef@...icpanda.com>
To: mingo@...hat.com, peterz@...radead.org,
linux-kernel@...r.kernel.org, umgwanakikbuti@...il.com,
tj@...nel.org, kernel-team@...com
Cc: Josef Bacik <jbacik@...com>
Subject: [PATCH 5/7] sched/fair: use the task weight instead of average in effective_load
From: Josef Bacik <jbacik@...com>
This is a preparation patch for the next patch. When adding a new task to a
cfs_rq we do not add its load_avg to the cfs_rq, we add its weight, and that
changes how the load average moves as the cfs_rq/task runs. Using the load
average in the effective_load() calculation is therefore slightly inaccurate
relative to what we actually want to compute (the real effect of waking this
task on this cpu), and biases us towards always affine-waking tasks.
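
To make the distinction concrete: a task's weight is fixed by its nice
level, while se->avg.load_avg ramps up and down with recent runnable
time, so the two can differ a lot right after enqueue. A minimal
userspace sketch (not kernel code; the halving step below is only a
stand-in for PELT's roughly 32ms half-life decay, and all numbers are
illustrative):

	#include <stdio.h>

	#define NICE_0_LOAD	1024	/* weight of a nice-0 task */

	int main(void)
	{
		unsigned long weight = NICE_0_LOAD; /* fixed by nice level */
		unsigned long load_avg = 0;	    /* new task starts at 0 */
		int period;

		/* An always-running task: the average chases the weight
		 * but lags it, which is the gap this patch cares about. */
		for (period = 1; period <= 6; period++) {
			load_avg += (weight - load_avg) / 2;
			printf("after %d periods: weight=%lu load_avg=%lu\n",
			       period, weight, load_avg);
		}
		return 0;
	}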
Signed-off-by: Josef Bacik <jbacik@...com>
---
kernel/sched/fair.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ee8dced..4e4fc5d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5646,7 +5646,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	int idx, this_cpu;
 	struct task_group *tg;
-	unsigned long weight;
+	unsigned long weight, avg;
 	int balanced;
 
 	idx	  = sd->wake_idx;
@@ -5661,14 +5661,15 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	 */
 	if (sync) {
 		tg = task_group(current);
-		weight = current->se.avg.load_avg;
+		weight = se_weight(&current->se);
+		avg = current->se.avg.load_avg;
 
-		this_load += effective_load(tg, this_cpu, -weight, -weight);
-		load += effective_load(tg, prev_cpu, 0, -weight);
+		this_load += effective_load(tg, this_cpu, -avg, -weight);
 	}
 
 	tg = task_group(p);
-	weight = p->se.avg.load_avg;
+	weight = se_weight(&p->se);
+	avg = p->se.avg.load_avg;
 
 	/*
 	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
@@ -5687,7 +5688,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 
 	if (this_load > 0) {
 		this_eff_load *= this_load +
-			effective_load(tg, this_cpu, weight, weight);
+			effective_load(tg, this_cpu, avg, weight);
 
 		prev_eff_load *= load;
 	}
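
As an aside for readers following the arithmetic: the callers now pass
the load_avg delta and the weight delta separately (se_weight() comes
from earlier in this series and returns the entity's instantaneous
load weight). Below is a toy model of the balance test above, with
effective_load() stubbed to pass its load delta straight through and
the capacity/imbalance factors reduced to constants, so only the
avg-vs-weight split is visible (userspace sketch, not kernel code;
all values are made up):

	#include <stdio.h>

	/* stub: the real helper walks the task_group hierarchy; here we
	 * just return the load delta so the surrounding math runs */
	static long effective_load(long load_delta, long weight_delta)
	{
		(void)weight_delta;
		return load_delta;
	}

	int main(void)
	{
		long this_load = 2048, load = 1024;  /* made-up cpu loads */
		long avg = 700, weight = 1024;	     /* task avg vs weight */
		long this_eff_load = 100;
		long prev_eff_load = 100 + (125 - 100) / 2;

		/* sync wakeup: model current leaving this cpu */
		this_load += effective_load(-avg, -weight);

		if (this_load > 0)
			this_eff_load *= this_load +
				effective_load(avg, weight);
		prev_eff_load *= load;

		printf("affine wake %s\n",
		       this_eff_load <= prev_eff_load ?
		       "allowed" : "rejected");
		return 0;
	}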
--
2.9.3