Message-ID: <20201214093122.GX3040@hirez.programming.kicks-ass.net>
Date: Mon, 14 Dec 2020 10:31:22 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
"Li, Aubrey" <aubrey.li@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Qais Yousef <qais.yousef@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...e.de>, Jiang Biao <benbjiang@...il.com>
Subject: Re: [RFC PATCH v7] sched/fair: select idle cpu from idle cpumask for
task wakeup
On Mon, Dec 14, 2020 at 09:11:29AM +0100, Vincent Guittot wrote:
> On Fri, 11 Dec 2020 at 23:50, Mel Gorman <mgorman@...hsingularity.net> wrote:
> > I originally did something like that on purpose too but Vincent called
> > it out so it is worth mentioning now to avoid surprises. That said, I'm
> > at the point where anything SIS-related simply needs excessive exposure
> > because no matter what it does, someone is unhappy with it.
>
> Yeah, I don't like extending the idle core search loop for something
> that is not related to the idle core search. This adds more work that
> is outside the control of sis_prop. So I'm still against hiding an
> idle cpu search inside the idle core search.
The idea, of course, is to do less. The current code is pretty crap in
that it can end up walking the same CPUs multiple times: once for the
idle-core scan and again for the idle-cpu scan.
Also, a possible follow-up would be something like the below (plus
removing all the sds->has_idle_cores crud), which brings core scanning
under SIS_PROP.
But it all needs lots of benchmarking :/
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6095,6 +6095,9 @@ static inline bool test_idle_cores(int c
static inline int __select_idle_core(int core, struct cpumask *cpus, int *idle_cpu)
{
+	if (idle_cpu && (available_idle_cpu(core) || sched_idle_cpu(core)))
+		*idle_cpu = core;
+
return -1;
}
@@ -6109,7 +6112,6 @@ static int select_idle_cpu(struct task_s
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
int this = smp_processor_id();
- bool smt = test_idle_cores(this, false);
int i, cpu, idle_cpu = -1, nr = INT_MAX;
struct sched_domain *this_sd;
u64 time;
@@ -6120,7 +6122,7 @@ static int select_idle_cpu(struct task_s
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
- if (sched_feat(SIS_PROP) && !smt) {
+ if (sched_feat(SIS_PROP)) {
u64 avg_cost, avg_idle, span_avg;
/*
@@ -6140,26 +6142,17 @@ static int select_idle_cpu(struct task_s
}
for_each_cpu_wrap(cpu, cpus, target) {
- if (smt) {
- i = __select_idle_core(cpu, cpus, &idle_cpu);
- if ((unsigned)i < nr_cpumask_bits)
- return i;
-
- } else {
- if (!--nr)
- return -1;
-
- if (available_idle_cpu(cpu) || sched_idle_cpu(cpu)) {
- idle_cpu = cpu;
- break;
- }
+ if (!--nr)
+ break;
+
+ i = __select_idle_core(cpu, cpus, &idle_cpu);
+ if ((unsigned)i < nr_cpumask_bits) {
+ idle_cpu = i;
+ break;
}
}
- if (smt)
- set_idle_cores(this, false);
-
- if (sched_feat(SIS_PROP) && !smt) {
+ if (sched_feat(SIS_PROP)) {
time = cpu_clock(this) - time;
update_avg(&this_sd->avg_scan_cost, time);
}
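For completeness, "under SIS_PROP" above refers to the proportional scan
budget (nr) already computed at the top of select_idle_cpu(). Below is a
standalone user-space sketch of that computation as I read it in the
current code; the helper name, the /512 fuzz factor, the floor of 4 and
the example inputs are illustrative only, so check against mainline
before relying on them:

#include <stdio.h>
#include <stdint.h>

/*
 * Model of the SIS_PROP scan budget: scan depth is proportional to how
 * idle this CPU has recently been, relative to the average scan cost.
 */
static unsigned int sis_prop_nr(uint64_t rq_avg_idle_ns,
				uint64_t avg_scan_cost_ns,
				unsigned int span_weight)
{
	uint64_t avg_idle = rq_avg_idle_ns / 512;	/* large fuzz factor */
	uint64_t avg_cost = avg_scan_cost_ns + 1;	/* avoid div by zero */
	uint64_t span_avg = (uint64_t)span_weight * avg_idle;

	if (span_avg > 4 * avg_cost)
		return (unsigned int)(span_avg / avg_cost);

	return 4;					/* minimum scan depth */
}

int main(void)
{
	/* e.g. 200us recent idle, 600ns average scan cost, 64-CPU LLC */
	printf("nr = %u\n", sis_prop_nr(200000, 600, 64));
	return 0;
}

With the !smt condition gone, that same nr also bounds how many cores
__select_idle_core() gets to probe per wakeup, which is what puts the
core scan under the same cost control as the cpu scan.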