Message-ID: <168959857267.28540.7534555852542758519.tip-bot2@tip-bot2>
Date: Mon, 17 Jul 2023 12:56:12 -0000
From: "tip-bot2 for Tim C Chen" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Tim Chen <tim.c.chen@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/topology: Record number of cores in sched group

The following commit has been merged into the sched/core branch of tip:

Commit-ID: d24cb0d9113f5932b8832533ce82351b5911ed50
Gitweb: https://git.kernel.org/tip/d24cb0d9113f5932b8832533ce82351b5911ed50
Author: Tim C Chen <tim.c.chen@...ux.intel.com>
AuthorDate: Fri, 07 Jul 2023 15:57:01 -07:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 13 Jul 2023 15:21:51 +02:00

sched/topology: Record number of cores in sched group

When balancing sibling domains that have different numbers of cores, the
number of tasks in each sibling domain should be proportional to the number
of cores in that domain. In preparation for implementing such a policy,
record the number of cores in a scheduling group (a rough sketch of the
intended proportional split follows the diffstat below).

Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lore.kernel.org/r/04641eeb0e95c21224352f5743ecb93dfac44654.1688770494.git.tim.c.chen@linux.intel.com
---
kernel/sched/sched.h | 1 +
kernel/sched/topology.c | 12 +++++++++++-
2 files changed, 12 insertions(+), 1 deletion(-)
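
The proportional-split policy itself is left to a later change; purely as an
illustration of the arithmetic the new per-group core count is meant to
enable, and not as actual scheduler code, a split of runnable tasks in
proportion to sg->cores might look like the user-space sketch below (all
numbers and variable names are hypothetical):

#include <stdio.h>

int main(void)
{
	/* Hypothetical numbers: two sibling groups with unequal core counts. */
	unsigned int total_tasks = 12;
	unsigned int cores_a = 8, cores_b = 4;	/* what sg->cores would report */
	unsigned int total_cores = cores_a + cores_b;

	/* Proportional split: tasks per group scale with that group's cores. */
	unsigned int share_a = total_tasks * cores_a / total_cores;
	unsigned int share_b = total_tasks - share_a;

	printf("group A: %u tasks, group B: %u tasks\n", share_a, share_b);
	return 0;
}

Compiled and run, this prints "group A: 8 tasks, group B: 4 tasks".
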
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1dcea9b..9baeb1a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1884,6 +1884,7 @@ struct sched_group {
 	atomic_t		ref;
 
 	unsigned int		group_weight;
+	unsigned int		cores;
 	struct sched_group_capacity *sgc;
 	int			asym_prefer_cpu;	/* CPU of highest priority in group */
 	int			flags;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d3a3b26..7cfcfe5 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1275,14 +1275,24 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 {
 	struct sched_group *sg = sd->groups;
+	struct cpumask *mask = sched_domains_tmpmask2;
 
 	WARN_ON(!sg);
 
 	do {
-		int cpu, max_cpu = -1;
+		int cpu, cores = 0, max_cpu = -1;
 
 		sg->group_weight = cpumask_weight(sched_group_span(sg));
 
+		cpumask_copy(mask, sched_group_span(sg));
+		for_each_cpu(cpu, mask) {
+			cores++;
+#ifdef CONFIG_SCHED_SMT
+			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
+#endif
+		}
+		sg->cores = cores;
+
 		if (!(sd->flags & SD_ASYM_PACKING))
 			goto next;
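
For readers without a kernel tree at hand, the counting loop added above can
be mimicked in plain user-space C, with a uint64_t bitmask standing in for a
struct cpumask and a made-up SMT sibling map standing in for cpu_smt_mask();
the sketch below shows the same dedup-by-siblings idea, not the kernel code
itself:

#include <stdio.h>
#include <stdint.h>

/*
 * Hypothetical SMT topology for this sketch: CPUs 0-7, with hardware threads
 * paired as (0,4), (1,5), (2,6), (3,7) -- i.e. 4 cores, 2 threads each.
 */
static uint64_t smt_mask(int cpu)
{
	return (1ull << (cpu % 4)) | (1ull << ((cpu % 4) + 4));
}

/* Count cores in a group span by clearing every visited CPU's siblings. */
static unsigned int count_cores(uint64_t span)
{
	uint64_t mask = span;
	unsigned int cores = 0;

	while (mask) {
		int cpu = __builtin_ctzll(mask);	/* lowest set bit, akin to for_each_cpu() */

		cores++;
		mask &= ~smt_mask(cpu);	/* akin to cpumask_andnot(mask, mask, cpu_smt_mask(cpu)) */
	}
	return cores;
}

int main(void)
{
	/* 8 CPUs spanning 4 two-thread cores -> prints 4. */
	printf("%u\n", count_cores(0xffull));
	return 0;
}

Each iteration picks the lowest remaining CPU, counts one core, and clears
that CPU's entire sibling set, so every core is counted exactly once no
matter how many hardware threads it has.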