Message-ID: <163220927155.25758.17787965953687869633.tip-bot2@tip-bot2>
Date: Tue, 21 Sep 2021 07:27:51 -0000
From: "tip-bot2 for Ricardo Neri" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Joel Fernandes (Google)" <joel@...lfernandes.org>,
Len Brown <len.brown@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/fair: Carve out logic to mark a group for
asymmetric packing

The following commit has been merged into the sched/core branch of tip:

Commit-ID: f58215ed2ff917dc40e6fb7b2d9b7fd290ec5055
Gitweb: https://git.kernel.org/tip/f58215ed2ff917dc40e6fb7b2d9b7fd290ec5055
Author: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
AuthorDate: Fri, 10 Sep 2021 18:18:18 -07:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Sat, 18 Sep 2021 12:18:40 +02:00

sched/fair: Carve out logic to mark a group for asymmetric packing

Create a separate function, sched_asym(). A subsequent changeset will
introduce logic to deal with SMT in conjunction with asymmetric
packing. Such logic will need the statistics of the scheduling
group provided as an argument. Update them before calling sched_asym().
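
The new sched_asym() is, for now, a thin wrapper around
sched_asym_prefer(). For reference, the helpers behind it look roughly
like this around this kernel version (a sketch of the v5.15-era
definitions in kernel/sched/fair.c and kernel/sched/sched.h; the exact
form may differ between releases):

/*
 * Default CPU priority for asymmetric packing: a lower CPU number
 * means a higher priority. Architectures (e.g. x86 ITMT) override this.
 */
int __weak arch_asym_cpu_priority(int cpu)
{
	return -cpu;
}

/* True if CPU @a should be preferred over CPU @b when packing tasks. */
static inline bool sched_asym_prefer(int a, int b)
{
	return arch_asym_cpu_priority(a) > arch_asym_cpu_priority(b);
}
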
Co-developed-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
Reviewed-by: Len Brown <len.brown@...el.com>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Link: https://lkml.kernel.org/r/20210911011819.12184-6-ricardo.neri-calderon@linux.intel.com
---
kernel/sched/fair.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d592de4..6d27375 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8538,6 +8538,13 @@ group_type group_classify(unsigned int imbalance_pct,
return group_has_spare;
}

+static inline bool
+sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
+ struct sched_group *group)
+{
+ return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+}
+
/**
* update_sg_lb_stats - Update sched_group's statistics for load balancing.
* @env: The load balancing environment.
@@ -8598,18 +8605,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
}
}

+ sgs->group_capacity = group->sgc->capacity;
+
+ sgs->group_weight = group->group_weight;
+
/* Check if dst CPU is idle and preferred to this group */
if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
- env->idle != CPU_NOT_IDLE &&
- sgs->sum_h_nr_running &&
- sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+ env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
+ sched_asym(env, sds, sgs, group)) {
sgs->group_asym_packing = 1;
}

- sgs->group_capacity = group->sgc->capacity;
-
- sgs->group_weight = group->group_weight;
-
sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);

/* Computing avg_load makes sense only when group is overloaded */
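
Taken together, the two hunks leave this part of update_sg_lb_stats()
reading roughly as follows (a sketch assembled from the diff above, not
a verbatim copy of fair.c):

	/* Updated before the asym check so sched_asym() can rely on them. */
	sgs->group_capacity = group->sgc->capacity;

	sgs->group_weight = group->group_weight;

	/* Check if dst CPU is idle and preferred to this group */
	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
	    sched_asym(env, sds, sgs, group)) {
		sgs->group_asym_packing = 1;
	}

	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);

Hoisting the group_capacity and group_weight updates above the check is
the functional point of the reordering: the SMT-aware sched_asym() that
a later patch introduces will consult those fields through the sgs
argument.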