Message-Id: <20231214175551.629945-1-keisuke.nishimura@inria.fr>
Date: Thu, 14 Dec 2023 18:55:50 +0100
From: Keisuke Nishimura <keisuke.nishimura@...ia.fr>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Abel Wu <wuyun.abel@...edance.com>, Josh Don <joshdon@...gle.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Xunlei Pang <xlpang@...ux.alibaba.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Julia Lawall <julia.lawall@...ia.fr>,
linux-kernel@...r.kernel.org,
Keisuke Nishimura <keisuke.nishimura@...ia.fr>
Subject: [PATCH 1/2] sched/fair: take into account scheduling domain in select_idle_smt()

When picking a CPU on task wakeup, select_idle_smt() has to take into
account the scheduling domain of @target. This is because the isolcpus
kernel command line option and cpusets can remove CPUs from that domain
to isolate them from their SMT siblings: for example, booting with
isolcpus=3 removes CPU 3 from the scheduling domains, while the
topology-based cpu_smt_mask() of its sibling still contains it.

This fix checks whether the candidate CPU is in the target's scheduling
domain. Commit df3cb4ea1fb6 ("sched/fair: Fix wrong cpu selecting from
isolated domain") originally introduced this check inside the loop, but
commit 3e6efe87cd5c ("sched/fair: Remove redundant check in
select_idle_smt()") accidentally removed it.

This commit brings the check back, with a small optimization: the
intersection of the task's CPU mask and the sched domain span is
computed once up front, instead of testing each candidate against the
domain inside the loop.

Fixes: 3e6efe87cd5c ("sched/fair: Remove redundant check in select_idle_smt()")
Signed-off-by: Keisuke Nishimura <keisuke.nishimura@...ia.fr>
Signed-off-by: Julia Lawall <julia.lawall@...ia.fr>
---
kernel/sched/fair.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
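
[ For reviewers: reconstructed from the hunks below, the resulting
  select_idle_smt() should read roughly as follows. The loop tail and
  the final return are context lines not visible in the diff, so they
  are taken from the current upstream implementation. ]

	/*
	 * Scan the local SMT mask for idle CPUs.
	 */
	static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
	{
		int cpu;
		struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);

		/*
		 * Restrict candidates to CPUs that are both allowed for
		 * the task and inside the LLC domain of @target; isolcpus
		 * and cpusets can leave SMT siblings of @target outside it.
		 */
		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

		for_each_cpu_and(cpu, cpu_smt_mask(target), cpus) {
			if (cpu == target)
				continue;
			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
				return cpu;
		}

		return -1;
	}
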
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bcd0f230e21f..71306b48cf68 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7284,11 +7284,18 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
 /*
  * Scan the local SMT mask for idle CPUs.
  */
-static int select_idle_smt(struct task_struct *p, int target)
+static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	int cpu;
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
+
+	/*
+	 * Check if the candidate cpu is in the LLC scheduling domain of @target.
+	 * Due to isolcpus or cpusets, there is no guarantee that it is.
+	 */
+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
-	for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
+	for_each_cpu_and(cpu, cpu_smt_mask(target), cpus) {
 		if (cpu == target)
 			continue;
 		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
@@ -7314,7 +7321,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
 	return __select_idle_cpu(core, p);
 }
 
-static inline int select_idle_smt(struct task_struct *p, int target)
+static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	return -1;
 }
@@ -7564,7 +7571,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	has_idle_core = test_idle_cores(target);
 
 	if (!has_idle_core && cpus_share_cache(prev, target)) {
-		i = select_idle_smt(p, prev);
+		i = select_idle_smt(p, sd, prev);
 		if ((unsigned int)i < nr_cpumask_bits)
 			return i;
 	}
--
2.34.1