Message-ID: <167361172839.4906.18334832591447660001.tip-bot2@tip-bot2>
Date: Fri, 13 Jan 2023 12:08:48 -0000
From: "tip-bot2 for Qais Yousef" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
"Qais Yousef (Google)" <qyousef@...alina.io>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] sched/fair: Fixes for capacity inversion detection

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: da07d2f9c153e457e845d4dcfdd13568d71d18a4
Gitweb: https://git.kernel.org/tip/da07d2f9c153e457e845d4dcfdd13568d71d18a4
Author: Qais Yousef <qyousef@...alina.io>
AuthorDate: Thu, 12 Jan 2023 12:27:08
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 13 Jan 2023 11:40:21 +01:00

sched/fair: Fixes for capacity inversion detection

Traversing the Perf Domains requires rcu_read_lock() to be held and is
conditional on sched_energy_enabled(). Ensure the right protections are
applied.

Also skip capacity inversion detection for our own pd, which was an
error.
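
In other words, the traversal must follow the standard RCU read-side
pattern. A simplified sketch of that pattern (the capacity comparison
itself is elided; rq and pd are as in update_cpu_capacity()):

	if (sched_energy_enabled()) {
		struct perf_domain *pd;

		rcu_read_lock();

		/* rq->rd->pd is RCU-protected; fetch it inside the lock */
		for (pd = rcu_dereference(rq->rd->pd); pd; pd = pd->next) {
			struct cpumask *pd_span = perf_domain_span(pd);

			/* A CPU can't be capacity-inverted against its own pd */
			if (cpumask_test_cpu(cpu_of(rq), pd_span))
				continue;

			/* ... compare this pd's capacity against ours ... */
		}

		rcu_read_unlock();
	}
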
Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion")
Reported-by: Dietmar Eggemann <dietmar.eggemann@....com>
Signed-off-by: Qais Yousef (Google) <qyousef@...alina.io>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Link: https://lore.kernel.org/r/20230112122708.330667-3-qyousef@layalina.io
---
kernel/sched/fair.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index be43731..0f87369 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8868,16 +8868,23 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 	 *   * Thermal pressure will impact all cpus in this perf domain
 	 *     equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;
+
+		rcu_read_lock();
 
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;
 
 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
 
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;
+
 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
 
@@ -8902,6 +8909,8 @@ static void update_cpu_capacity(struct sched_domain *sd, int cpu)
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}
 
 	trace_sched_cpu_capacity_tp(rq);