Message-ID: <20201203141936.GV3371@techsingularity.net>
Date: Thu, 3 Dec 2020 14:19:36 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Aubrey Li <aubrey.li@...ux.intel.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Ziljstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Linux-ARM <linux-arm-kernel@...ts.infradead.org>
Subject: [PATCH 09/10] sched/fair: Limit the search for an idle core
Note: This is a bad idea; it is included for illustration only, to show
      how the search space can be filtered at each stage. Searching an
      idle_cpu_mask would be a potential option. select_idle_core()
      would be left alone as it has its own throttling mechanism.
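As a rough sketch of that option, the candidate mask could be narrowed
before the core walk. Note that the per-LLC idle mask (named
idle_cpus_mask below) is hypothetical and does not exist in the current
tree:

	/*
	 * Hypothetical sketch only: narrow the candidate mask with a
	 * per-LLC idle mask before walking cores. idle_cpus_mask is an
	 * assumed field, not something present in sched_domain_shared.
	 */
	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
	cpumask_and(cpus, cpus, sd->shared->idle_cpus_mask);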
select_idle_core() may search a full domain for an idle core even if idle
CPUs exist, resulting in an excessive search. This patch partially limits
the search for an idle core, similar to select_idle_cpu(), once an idle
candidate is found.
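In simplified form, the core scan after this patch behaves like the
sketch below (per-cpu statistics and the cpumask setup are omitted);
nr is the depth passed in from sis_search_depth():

	/*
	 * Simplified sketch of the throttled core scan. Once an idle CPU
	 * (idle_candidate) has been observed, every additional runqueue
	 * examined consumes one unit of the nr budget. When the budget is
	 * exhausted, the scan stops and the partial candidate is returned
	 * instead of continuing to look for a fully idle core.
	 */
	for_each_cpu_wrap(core, cpus, target) {
		bool idle = true;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			/* Apply limits if there is an idle candidate */
			if (idle_candidate != -1)
				nr--;

			if (!available_idle_cpu(cpu)) {
				idle = false;
				if (idle_candidate != -1)
					break;
			} else if (idle_candidate == -1 &&
				   cpumask_test_cpu(cpu, p->cpus_ptr)) {
				idle_candidate = cpu;
			}
		}

		if (idle)
			return core;

		if (!nr)
			break;

		cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
	}

	return idle_candidate;	/* best partial candidate, or -1 */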
Note that this patch can *increase* the number of runqueues considered.
Any searching done by select_idle_core() is duplicated by select_idle_cpu()
if an idle candidate is not found. If there is an idle CPU then aborting
select_idle_core() can have a negative impact. This is addressed in the
next patch.
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
kernel/sched/fair.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 33ce65b67381..cd95daf9f53e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6095,7 +6095,8 @@ void __update_idle_core(struct rq *rq)
* there are no idle cores left in the system; tracked through
* sd_llc->shared->has_idle_cores and enabled through update_idle_core() above.
*/
-static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_core(struct task_struct *p, struct sched_domain *sd,
+ int target, int nr)
{
int idle_candidate = -1;
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
@@ -6115,6 +6116,11 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
for_each_cpu(cpu, cpu_smt_mask(core)) {
schedstat_inc(this_rq()->sis_scanned);
+
+ /* Apply limits if there is an idle candidate */
+ if (idle_candidate != -1)
+ nr--;
+
if (!available_idle_cpu(cpu)) {
idle = false;
if (idle_candidate != -1)
@@ -6130,6 +6136,9 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
if (idle)
return core;
+ if (!nr)
+ break;
+
cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
}
@@ -6165,7 +6174,8 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
#else /* CONFIG_SCHED_SMT */
-static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
+static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd,
+ int target, int nr)
{
return -1;
}
@@ -6349,7 +6359,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
depth = sis_search_depth(sd, this_sd);
schedstat_inc(this_rq()->sis_domain_search);
- i = select_idle_core(p, sd, target);
+ i = select_idle_core(p, sd, target, depth);
if ((unsigned)i < nr_cpumask_bits)
return i;
--
2.26.2