Message-Id: <1611554578-6464-1-git-send-email-aubrey.li@intel.com>
Date: Mon, 25 Jan 2021 14:02:58 +0800
From: Aubrey Li <aubrey.li@...el.com>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...hsingularity.net, bristot@...hat.com
Cc: linux-kernel@...r.kernel.org, Aubrey Li <aubrey.li@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Aubrey Li <aubrey.li@...ux.intel.com>
Subject: [RFC PATCH v1] sched/fair: limit load balance redo times at the same sched_domain level
A long-tail load balance cost is observed on the newly idle path. It is
caused by a race window between the first nr_running check of the busiest
runqueue and the nr_running recheck in detach_tasks(). Before the busiest
runqueue is locked, its tasks can be pulled by other CPUs, so nr_running
of the busiest runqueue drops to 1. detach_tasks() then bails out with
the LBF_ALL_PINNED flag still set, which triggers a load_balance() redo
at the same sched_domain level. To find the new busiest sched_group and
CPU, load balance recomputes and updates the various load statistics,
which eventually leads to the long-tail cost.
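For illustration only, here is a minimal, self-contained userspace model
of the race. All names below are made up; only the ordering of events
(sample nr_running without the lock, other CPUs pull tasks in between,
recheck under the lock) mirrors kernel/sched/fair.c:

#include <stdio.h>

struct rq_model {
	int nr_running;
};

/* Another CPU empties the busiest runqueue inside the race window. */
static void other_cpus_pull_tasks(struct rq_model *busiest)
{
	busiest->nr_running = 1;	/* only the running task is left */
}

/* Models detach_tasks(): recheck under the (pretend) lock and return how
 * many tasks could be detached; 0 forces the caller down the redo path. */
static int detach_tasks_model(struct rq_model *busiest)
{
	if (busiest->nr_running <= 1)
		return 0;
	return busiest->nr_running - 1;
}

int main(void)
{
	struct rq_model busiest = { .nr_running = 4 };

	/* Newly idle balance samples nr_running without holding the lock. */
	int seen = busiest.nr_running;

	other_cpus_pull_tasks(&busiest);		/* the race window */

	int detached = detach_tasks_model(&busiest);

	printf("saw %d tasks, detached %d, redo at the same domain: %s\n",
	       seen, detached, detached ? "no" : "yes");
	return 0;
}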
This patch introduces a variable (sched_nr_lb_redo) to limit the number
of load balance redos. Combined with sysctl_sched_nr_migrate, the max
load balance cost is reduced from 100+ us to 70+ us, measured on a
4-socket x86 system with 192 logical CPUs.
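As a rough sketch of why the cap helps on a large box (again with
made-up names and purely illustrative CPU counts, not the kernel code):
before the patch, the redo loop on this path is only bounded by the
number of CPUs that can be cleared from the load-balance mask, while the
new redo_cnt/redo_max check bounds it to at most sched_nr_lb_redo passes
through the expensive statistics update:

#include <stdio.h>

/* Worst case: the race is lost on every attempt, so nothing is ever
 * detached and each pass clears one more candidate CPU from the mask.
 * redo_max == 0 models the unpatched behaviour (no extra cap). */
static unsigned int stats_passes(unsigned int candidate_cpus,
				 unsigned int redo_max)
{
	unsigned int redo_cnt = 0, passes = 0;

	while (candidate_cpus > 0) {
		passes++;		/* find_busiest_group() and friends */
		candidate_cpus--;	/* busiest CPU dropped from the mask */

		if (redo_max && ++redo_cnt >= redo_max)
			break;		/* the patch's extra redo condition */
	}
	return passes;
}

int main(void)
{
	/* Illustrative: CPUs outside the destination group on a 4-socket,
	 * 192-CPU system with evenly sized NUMA groups. */
	unsigned int cpus = 144;

	printf("uncapped worst-case statistics passes: %u\n",
	       stats_passes(cpus, 0));
	printf("capped (redo_max=1) statistics passes: %u\n",
	       stats_passes(cpus, 1));
	return 0;
}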
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Signed-off-by: Aubrey Li <aubrey.li@...ux.intel.com>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ae7ceba..b59f371 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7407,6 +7407,8 @@ struct lb_env {
 	unsigned int		loop;
 	unsigned int		loop_break;
 	unsigned int		loop_max;
+	unsigned int		redo_cnt;
+	unsigned int		redo_max;
 
 	enum fbq_type		fbq_type;
 	enum migration_type	migration_type;
@@ -9525,6 +9527,7 @@ static int should_we_balance(struct lb_env *env)
 	return group_balance_cpu(sg) == env->dst_cpu;
 }
 
+static const unsigned int sched_nr_lb_redo = 1;
 /*
  * Check this_cpu to ensure it is balanced within domain. Attempt to move
  * tasks if there is an imbalance.
@@ -9547,6 +9550,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		.dst_grpmask	= sched_group_span(sd->groups),
 		.idle		= idle,
 		.loop_break	= sched_nr_migrate_break,
+		.redo_max	= sched_nr_lb_redo,
 		.cpus		= cpus,
 		.fbq_type	= all,
 		.tasks		= LIST_HEAD_INIT(env.tasks),
@@ -9682,7 +9686,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 			 * destination group that is receiving any migrated
 			 * load.
 			 */
-			if (!cpumask_subset(cpus, env.dst_grpmask)) {
+			if (!cpumask_subset(cpus, env.dst_grpmask) &&
+			    ++env.redo_cnt < env.redo_max) {
 				env.loop = 0;
 				env.loop_break = sched_nr_migrate_break;
 				goto redo;
--
2.7.4