Message-ID: <173693263563.31546.3711049219135715005.tip-bot2@tip-bot2>
Date: Wed, 15 Jan 2025 09:17:15 -0000
From: "tip-bot2 for K Prateek Nayak" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: K Prateek Nayak <kprateek.nayak@....com>,
 "Peter Zijlstra (Intel)" <peterz@...radead.org>,
 Shrikanth Hegde <sshegde@...ux.ibm.com>,
 Vincent Guittot <vincent.guittot@...aro.org>, x86@...nel.org,
 linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/fair: Do not compute NUMA Balancing stats
 unnecessarily during lb

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     0ac1ee9ebfb7fa2af4a267fe0e8fa275ba8ec6fc
Gitweb:        https://git.kernel.org/tip/0ac1ee9ebfb7fa2af4a267fe0e8fa275ba8ec6fc
Author:        K Prateek Nayak <kprateek.nayak@....com>
AuthorDate:    Mon, 23 Dec 2024 04:34:05
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Mon, 13 Jan 2025 14:10:25 +01:00

sched/fair: Do not compute NUMA Balancing stats unnecessarily during lb

Aggregate nr_numa_running and nr_preferred_running only when load
balancing at NUMA domains. While at it, also move the aggregation below
the idle_cpu() check, since an idle CPU cannot have any preferred
tasks.
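
With the three hunks below applied, the relevant part of the per-CPU
loop in update_sg_lb_stats() reads roughly as follows (a simplified
sketch stitched together from the diff; the surrounding stats
accounting is elided):

	/* No need to call idle_cpu() if nr_running is not 0 */
	if (!nr_running && idle_cpu(i)) {
		sgs->idle_cpus++;
		/* An idle CPU cannot be running a preferred task */
		continue;
	}

#ifdef CONFIG_NUMA_BALANCING
	/* Only fbq_classify_group() uses this to classify NUMA groups */
	if (sd_flags & SD_NUMA) {
		sgs->nr_numa_running += rq->nr_numa_running;
		sgs->nr_preferred_running += rq->nr_preferred_running;
	}
#endif
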
Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>
Link: https://lore.kernel.org/r/20241223043407.1611-7-kprateek.nayak@amd.com
---
 kernel/sched/fair.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 52f7278..650d698 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10409,7 +10409,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 				      bool *sg_overloaded,
 				      bool *sg_overutilized)
 {
-	int i, nr_running, local_group;
+	int i, nr_running, local_group, sd_flags = env->sd->flags;
 
 	memset(sgs, 0, sizeof(*sgs));
 
@@ -10433,10 +10433,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (cpu_overutilized(i))
 			*sg_overutilized = 1;
 
-#ifdef CONFIG_NUMA_BALANCING
-		sgs->nr_numa_running += rq->nr_numa_running;
-		sgs->nr_preferred_running += rq->nr_preferred_running;
-#endif
 		/*
 		 * No need to call idle_cpu() if nr_running is not 0
 		 */
@@ -10446,10 +10442,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			continue;
 		}
 
+#ifdef CONFIG_NUMA_BALANCING
+		/* Only fbq_classify_group() uses this to classify NUMA groups */
+		if (sd_flags & SD_NUMA) {
+			sgs->nr_numa_running += rq->nr_numa_running;
+			sgs->nr_preferred_running += rq->nr_preferred_running;
+		}
+#endif
 		if (local_group)
 			continue;
 
-		if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
+		if (sd_flags & SD_ASYM_CPUCAPACITY) {
 			/* Check for a misfit task on the cpu */
 			if (sgs->group_misfit_task_load < rq->misfit_task_load) {
 				sgs->group_misfit_task_load = rq->misfit_task_load;
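
For context on the "Only fbq_classify_group() uses this" comment above:
the sole consumer of these two counters is fbq_classify_group(), which
at the time of writing reads as below (from kernel/sched/fair.c under
CONFIG_NUMA_BALANCING; comments added here for illustration, and
details may differ between kernel versions):

	static inline enum fbq_type fbq_classify_group(struct sg_lb_stats *sgs)
	{
		/* Some running tasks are not NUMA tasks: regular balancing */
		if (sgs->sum_nr_running > sgs->nr_numa_running)
			return regular;
		/* All NUMA tasks, but some run off their preferred node */
		if (sgs->sum_nr_running > sgs->nr_preferred_running)
			return remote;
		/* Every task runs on its preferred node */
		return all;
	}

Since this classification only matters when balancing across NUMA
nodes, gating the aggregation behind SD_NUMA loses no information for
other domains.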