Message-Id: <1464001138-25063-10-git-send-email-morten.rasmussen@arm.com>
Date: Mon, 23 May 2016 11:58:51 +0100
From: Morten Rasmussen <morten.rasmussen@....com>
To: peterz@...radead.org, mingo@...hat.com
Cc: dietmar.eggemann@....com, yuyang.du@...el.com,
vincent.guittot@...aro.org, mgalbraith@...e.de,
linux-kernel@...r.kernel.org,
Morten Rasmussen <morten.rasmussen@....com>
Subject: [PATCH 09/16] sched/fair: Let asymmetric cpu configurations balance at wake-up

Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
configurations, SD_WAKE_AFFINE is only desirable if the waking task's
compute demand (utilization) is suitable for the cpu capacities
available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup
balancing take over (find_idlest_{group, cpu}()).

The assumption is that SD_WAKE_AFFINE is never set for a sched_domain
containing cpus with different capacities. This is enforced by an
earlier patch in this series based on the SD_ASYM_CPUCAPACITY flag.

Ideally, we shouldn't set 'want_affine' in the first place, but we don't
know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
traversing them.
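
For illustration, with capacity_margin = 1280 the waking task is
considered to fit on prev_cpu as long as task_util * 1280 <=
prev_cap * 1024, i.e. utilization up to roughly 80% of the cpu's
capacity. A minimal standalone sketch of that check, using made-up
capacity values (1024 for the biggest cpu, 430 for a LITTLE cpu);
not part of the patch:

#include <stdio.h>

static unsigned int capacity_margin = 1280;	/* ~20% headroom */

/* Mirrors the fit test in wake_cap() below; illustration only. */
static int task_fits(long util, long cap)
{
	return cap * 1024 >= util * capacity_margin;
}

int main(void)
{
	printf("util 300, cap 430: %d\n", task_fits(300, 430)); /* 1: fits */
	printf("util 400, cap 430: %d\n", task_fits(400, 430)); /* 0: too big */
	return 0;
}
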
cc: Ingo Molnar <mingo@...hat.com>
cc: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
---
 kernel/sched/fair.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 564215d..ce44fa7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -114,6 +114,12 @@ unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
 unsigned int sysctl_sched_cfs_bandwidth_slice = 5000UL;
 #endif
 
+/*
+ * The margin used when comparing utilization with cpu capacity:
+ * util * 1024 < capacity * margin
+ */
+unsigned int capacity_margin = 1280; /* ~20% */
+
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;
@@ -5293,6 +5299,25 @@ static int cpu_util(int cpu)
 	return (util >= capacity) ? capacity : util;
 }
 
+static inline int task_util(struct task_struct *p)
+{
+	return p->se.avg.util_avg;
+}
+
+static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
+{
+	long delta;
+	long prev_cap = capacity_of(prev_cpu);
+
+	delta = cpu_rq(cpu)->rd->max_cpu_capacity - prev_cap;
+
+	/* prev_cpu is fairly close to max, no need to abort wake_affine */
+	if (delta < prev_cap >> 3)
+		return 0;
+
+	return prev_cap * 1024 < task_util(p) * capacity_margin;
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
@@ -5316,7 +5341,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 	if (sd_flag & SD_BALANCE_WAKE) {
 		record_wakee(p);
-		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
+		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
+			      && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
 	}
 
 	rcu_read_lock();
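
A quick standalone sketch of wake_cap()'s early bail-out, again with
made-up capacities (1024 everywhere on a symmetric system, 430 for a
LITTLE cpu): on symmetric systems delta stays below prev_cap >> 3, so
wake_cap() returns 0 and wake_affine behaviour is unchanged. This is
illustration only, not part of the patch:

#include <stdio.h>

/* Mirrors the delta check in wake_cap(); illustration only. */
static int prev_cpu_close_to_max(long max_cap, long prev_cap)
{
	long delta = max_cap - prev_cap;

	/* prev_cpu within prev_cap/8 of the biggest cpu: keep wake_affine */
	return delta < prev_cap >> 3;
}

int main(void)
{
	printf("symmetric, 1024 vs 1024: %d\n", prev_cpu_close_to_max(1024, 1024)); /* 1 */
	printf("asymmetric, 1024 vs 430: %d\n", prev_cpu_close_to_max(1024, 430));  /* 0 */
	return 0;
}
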
--
1.9.1