Message-ID: <ddbb4070-e7bd-485a-bcd6-d6b9192656d6@linux.ibm.com>
Date: Fri, 13 Dec 2024 20:25:12 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: "H. Peter Anvin" <hpa@...or.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Mario Limonciello <mario.limonciello@....com>,
Meng Li <li.meng@....com>, Huang Rui <ray.huang@....com>,
"Gautham R. Shenoy" <gautham.shenoy@....com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/8] sched/fair: Do not compute NUMA Balancing stats
unnecessarily during lb
On 12/12/24 00:25, K Prateek Nayak wrote:
> Aggregate nr_numa_running and nr_preferred_running when load balancing
> at NUMA domains only. While at it, also move the aggregation below the
> idle_cpu() check since an idle CPU cannot have any preferred tasks.
>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
> ---
> kernel/sched/fair.c | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2c4ebfc82917..ec2a79c8d0e7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10340,7 +10340,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> bool *sg_overloaded,
> bool *sg_overutilized)
> {
> - int i, nr_running, local_group;
> + int i, nr_running, local_group, sd_flags = env->sd->flags;
>
> memset(sgs, 0, sizeof(*sgs));
>
> @@ -10364,10 +10364,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> if (cpu_overutilized(i))
> *sg_overutilized = 1;
>
> -#ifdef CONFIG_NUMA_BALANCING
> - sgs->nr_numa_running += rq->nr_numa_running;
> - sgs->nr_preferred_running += rq->nr_preferred_running;
> -#endif
> /*
> * No need to call idle_cpu() if nr_running is not 0
> */
> @@ -10377,10 +10373,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> continue;
> }
>
> +#ifdef CONFIG_NUMA_BALANCING
> + /* Only fbq_classify_group() uses this to classify NUMA groups */
> + if (sd_flags & SD_NUMA) {
> + sgs->nr_numa_running += rq->nr_numa_running;
> + sgs->nr_preferred_running += rq->nr_preferred_running;
> + }
> +#endif
> if (local_group)
> continue;
>
> - if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
> + if (sd_flags & SD_ASYM_CPUCAPACITY) {
> /* Check for a misfit task on the cpu */
> if (sgs->group_misfit_task_load < rq->misfit_task_load) {
> sgs->group_misfit_task_load = rq->misfit_task_load;
Reviewed-by: Shrikanth Hegde <sshegde@...ux.ibm.com>