Message-ID: <tip-1ad3aaf3fcd2444406628a19a9b9e0922b95e2d4@git.kernel.org>
Date: Thu, 8 Jun 2017 02:22:53 -0700
From: tip-bot for Peter Zijlstra <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: clm@...com, torvalds@...ux-foundation.org, efault@....de,
mingo@...nel.org, kitsunyan@...ox.ru, peterz@...radead.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
matt@...eblueprint.co.uk, hpa@...or.com
Subject: [tip:sched/core] sched/core: Implement new approach to scale
select_idle_cpu()
Commit-ID: 1ad3aaf3fcd2444406628a19a9b9e0922b95e2d4
Gitweb: http://git.kernel.org/tip/1ad3aaf3fcd2444406628a19a9b9e0922b95e2d4
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Wed, 17 May 2017 12:53:50 +0200
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 8 Jun 2017 10:25:17 +0200

sched/core: Implement new approach to scale select_idle_cpu()

Hackbench recently suffered a bunch of pain, first by commit:

  4c77b18cf8b7 ("sched/fair: Make select_idle_cpu() more aggressive")

and then by commit:

  c743f0a5c50f ("sched/fair, cpumask: Export for_each_cpu_wrap()")

which fixed a bug in the initial for_each_cpu_wrap() implementation
that made select_idle_cpu() even more expensive. The bug was that it
would skip over CPUs when bits were consecutive in the bitmask.

This, however, gave me an idea to fix select_idle_cpu(); where the old
scheme was a cliff-edge throttle on idle scanning, this introduces a
more gradual approach. Instead of stopping the scan entirely, we limit
how many CPUs we scan.
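
Purely as an illustration (not part of the patch), here is a minimal
stand-alone sketch of the proportional limit the SIS_PROP branch below
computes; the 16-CPU LLC and the avg_idle/avg_scan_cost numbers are
made up:

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the SIS_PROP idea: the scan depth is proportional to
 * avg_idle / avg_scan_cost, with a floor of 4 CPUs, instead of the
 * old all-or-nothing cutoff.
 */
static uint64_t scan_limit(uint64_t span_weight, uint64_t avg_idle_ns,
			   uint64_t avg_scan_cost_ns)
{
	uint64_t avg_idle = avg_idle_ns / 512;		/* large fuzz factor */
	uint64_t avg_cost = avg_scan_cost_ns + 1;	/* avoid div-by-zero */
	uint64_t span_avg = span_weight * avg_idle;

	if (span_avg > 4 * avg_cost)
		return span_avg / avg_cost;
	return 4;
}

int main(void)
{
	/* Made-up numbers: 16-CPU LLC, ~4us average scan cost. */
	printf("long idle:  scan up to %llu CPUs\n",
	       (unsigned long long)scan_limit(16, 2000000, 4000)); /* ~15 */
	printf("short idle: scan up to %llu CPUs\n",
	       (unsigned long long)scan_limit(16,  200000, 4000)); /* 4 */
	return 0;
}

With a long expected idle period the limit approaches the full LLC
width; with a short one it degrades gracefully to a handful of CPUs
instead of cutting the scan off entirely.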

Initial benchmarks show that it mostly recovers hackbench while not
hurting anything else, except Mason's schbench, though not as badly as
the old scheme did.

It also appears to recover the tbench high-end, which also suffered like
hackbench.
Tested-by: Matt Fleming <matt@...eblueprint.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Chris Mason <clm@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: hpa@...or.com
Cc: kitsunyan <kitsunyan@...ox.ru>
Cc: linux-kernel@...r.kernel.org
Cc: lvenanci@...hat.com
Cc: riel@...hat.com
Cc: xiaolong.ye@...el.com
Link: http://lkml.kernel.org/r/20170517105350.hk5m4h4jb6dfr65a@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/fair.c     | 21 ++++++++++++++++-----
 kernel/sched/features.h |  1 +
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 47a0c55..396bca9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5794,27 +5794,38 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
 static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct sched_domain *this_sd;
-	u64 avg_cost, avg_idle = this_rq()->avg_idle;
+	u64 avg_cost, avg_idle;
 	u64 time, cost;
 	s64 delta;
-	int cpu;
+	int cpu, nr = INT_MAX;
 
 	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
 	if (!this_sd)
 		return -1;
 
-	avg_cost = this_sd->avg_scan_cost;
-
 	/*
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
 	 */
-	if (sched_feat(SIS_AVG_CPU) && (avg_idle / 512) < avg_cost)
+	avg_idle = this_rq()->avg_idle / 512;
+	avg_cost = this_sd->avg_scan_cost + 1;
+
+	if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
 		return -1;
 
+	if (sched_feat(SIS_PROP)) {
+		u64 span_avg = sd->span_weight * avg_idle;
+		if (span_avg > 4*avg_cost)
+			nr = div_u64(span_avg, avg_cost);
+		else
+			nr = 4;
+	}
+
 	time = local_clock();
 
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
+		if (!--nr)
+			return -1;
+
 		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
 			continue;
 		if (idle_cpu(cpu))
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index dc4d148..d3fb155 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -55,6 +55,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
  */
 SCHED_FEAT(SIS_AVG_CPU, false)
+SCHED_FEAT(SIS_PROP, true)
 
 /*
  * Issue a WARN when we do multiple update_rq_clock() calls
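
Not visible in the hunks above: the time/cost/delta variables that
select_idle_cpu() declares are used after the scan loop to maintain a
running estimate of the scan cost, and that estimate (avg_scan_cost) is
what the SIS_PROP branch divides by. Purely as a stand-alone
illustration, assuming a 1/8-weight exponential average rather than
quoting the kernel code:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: fold a new scan time into a running average with
 * 1/8 weight. avg_scan_cost, as used by SIS_PROP above, is maintained
 * along these lines.
 */
static void update_avg(uint64_t *avg_ns, uint64_t sample_ns)
{
	int64_t delta = (int64_t)(sample_ns - *avg_ns);

	*avg_ns += delta / 8;
}

int main(void)
{
	uint64_t avg_scan_cost = 0;
	uint64_t samples[] = { 8000, 6000, 12000, 5000 };	/* made-up ns */
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		update_avg(&avg_scan_cost, samples[i]);
		printf("after %llu ns scan: avg ~%llu ns\n",
		       (unsigned long long)samples[i],
		       (unsigned long long)avg_scan_cost);
	}
	return 0;
}

The slow-moving average keeps a single noisy scan from swinging the
limit computed by SIS_PROP too far in either direction.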