Message-ID: <166601794728.401.12715574315291898146.tip-bot2@tip-bot2>
Date: Mon, 17 Oct 2022 14:45:47 -0000
From: "tip-bot2 for Lin Shengwang" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Lin Shengwang <linshengwang1@...wei.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] sched/core: Fix comparison in sched_group_cookie_match()

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: e705968dd687574b6ca3ebe772683d5642759132
Gitweb: https://git.kernel.org/tip/e705968dd687574b6ca3ebe772683d5642759132
Author: Lin Shengwang <linshengwang1@...wei.com>
AuthorDate: Sat, 08 Oct 2022 10:27:09 +08:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 17 Oct 2022 16:41:24 +02:00

sched/core: Fix comparison in sched_group_cookie_match()

In commit 97886d9dcd86 ("sched: Migration changes for core scheduling"),
sched_group_cookie_match() was added to help determine if a cookie
matches the core state.

However, while it iterates over the SMT group, it fails to actually use
the RQ of each iterated CPU; use cpu_rq(cpu) instead of rq to fix
things.
Fixes: 97886d9dcd86 ("sched: Migration changes for core scheduling")
Signed-off-by: Lin Shengwang <linshengwang1@...wei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20221008022709.642-1-linshengwang1@huawei.com
---
 kernel/sched/sched.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1644242..0d08511 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1182,6 +1182,14 @@ static inline bool is_migration_disabled(struct task_struct *p)
 #endif
 }
 
+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+
+#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
+#define this_rq()		this_cpu_ptr(&runqueues)
+#define task_rq(p)		cpu_rq(task_cpu(p))
+#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
+#define raw_rq()		raw_cpu_ptr(&runqueues)
+
 struct sched_group;
 #ifdef CONFIG_SCHED_CORE
 static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -1269,7 +1277,7 @@ static inline bool sched_group_cookie_match(struct rq *rq,
 		return true;
 
 	for_each_cpu_and(cpu, sched_group_span(group), p->cpus_ptr) {
-		if (sched_core_cookie_match(rq, p))
+		if (sched_core_cookie_match(cpu_rq(cpu), p))
 			return true;
 	}
 	return false;
@@ -1384,14 +1392,6 @@ static inline void update_idle_core(struct rq *rq)
 static inline void update_idle_core(struct rq *rq) { }
 #endif
 
-DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-
-#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
-#define this_rq()		this_cpu_ptr(&runqueues)
-#define task_rq(p)		cpu_rq(task_cpu(p))
-#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
-#define raw_rq()		raw_cpu_ptr(&runqueues)
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static inline struct task_struct *task_of(struct sched_entity *se)
 {
#ifdef CONFIG_FAIR_GROUP_SCHED
static inline struct task_struct *task_of(struct sched_entity *se)
{
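
One thing the changelog leaves implicit: the first hunk moves the
runqueues declaration and the cpu_rq()/this_rq()/task_rq()/cpu_curr()/
raw_rq() block up in sched.h (and the last hunk deletes it from its old
spot) because sched_group_cookie_match() now expands cpu_rq(), and the
macro plus the per-CPU declaration must be visible before that point in
the header. A minimal userspace sketch of the same declare-before-use
pattern, with toy names that are not the kernel's (hypothetical,
illustrative only):

	#include <stdio.h>

	struct rq { int nr_running; };

	/* The backing array and accessor must be defined before any user. */
	static struct rq runqueues[4];
	#define cpu_rq(cpu)	(&runqueues[(cpu)])

	/* Mirrors the fixed loop: consult each CPU's own rq, not a single
	 * caller-supplied rq. */
	static int group_has_runnable(const int *cpus, int n)
	{
		for (int i = 0; i < n; i++)
			if (cpu_rq(cpus[i])->nr_running)
				return 1;
		return 0;
	}

	int main(void)
	{
		cpu_rq(2)->nr_running = 1;
		int cpus[] = { 0, 1, 2, 3 };
		printf("%d\n", group_has_runnable(cpus, 4)); /* prints 1 */
		return 0;
	}

Defining cpu_rq() below group_has_runnable() would not compile, which is
exactly why the patch shuffles the kernel's definitions.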