Open Source and information security mailing list archives
Date:   Tue, 05 Oct 2021 14:12:01 -0000
From:   "tip-bot2 for Ricardo Neri" <>
Cc:     "Peter Zijlstra (Intel)" <>,
        Ricardo Neri <>,
        "Joel Fernandes (Google)" <>,
        Len Brown <>,
        Vincent Guittot <>
Subject: [tip: sched/core] sched/fair: Provide update_sg_lb_stats() with sched
 domain statistics

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     c0d14b57fe0c11b65ce8a1a4a58a48f3f324ca0f
Author:        Ricardo Neri <>
AuthorDate:    Fri, 10 Sep 2021 18:18:17 -07:00
Committer:     Peter Zijlstra <>
CommitterDate: Tue, 05 Oct 2021 15:52:03 +02:00

sched/fair: Provide update_sg_lb_stats() with sched domain statistics

Before deciding to pull tasks when using asymmetric packing of tasks,
on some architectures (e.g., x86) it is necessary to know not only the
state of dst_cpu but also of its SMT siblings. The decision to classify
a candidate busiest group as group_asym_packing is done in
update_sg_lb_stats(). Give this function access to the scheduling domain
statistics, which contain the statistics of the local group.

Originally-by: Peter Zijlstra (Intel) <>
Signed-off-by: Ricardo Neri <>
Signed-off-by: Peter Zijlstra (Intel) <>
Reviewed-by: Joel Fernandes (Google) <>
Reviewed-by: Len Brown <>
Reviewed-by: Vincent Guittot <>
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e050b1d..2e8ef33 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8579,6 +8579,7 @@ group_type group_classify(unsigned int imbalance_pct,
  * @sg_status: Holds flag indicating the status of the sched_group
  */
 static inline void update_sg_lb_stats(struct lb_env *env,
+				      struct sd_lb_stats *sds,
 				      struct sched_group *group,
 				      struct sg_lb_stats *sgs,
 				      int *sg_status)
@@ -8587,7 +8588,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	memset(sgs, 0, sizeof(*sgs));
-	local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
+	local_group = group == sds->local;
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
 		struct rq *rq = cpu_rq(i);
@@ -9150,7 +9151,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 				update_group_capacity(env->sd, env->dst_cpu);
-		update_sg_lb_stats(env, sg, sgs, &sg_status);
+		update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
 		if (local_group)
 			goto next_group;
