Message-ID: <1198e424-cd1d-407d-8050-c561e57dd2b3@amd.com>
Date: Thu, 11 Sep 2025 12:22:23 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Yury Norov <yury.norov@...il.com>, Ingo Molnar <mingo@...hat.com>, "Peter
Zijlstra" <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
"Vincent Guittot" <vincent.guittot@...aro.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/fair: use cpumask_weight_and() in
sched_balance_find_dst_group()

Hello Yury,

On 9/11/2025 9:14 AM, Yury Norov wrote:
> From: Yury Norov (NVIDIA) <yury.norov@...il.com>
>
> In the group_has_spare case, the function creates a temporary cpumask
> just to calculate the weight of (p->cpus_ptr & sched_group_span(local)).
>
> We've got a dedicated helper for it.
Neat! I didn't realize this existed back when I added that cpumask_and()
+ cpumask_weight() combo. Please feel free to include:

Reviewed-by: K Prateek Nayak <kprateek.nayak@....com>
>
> Signed-off-by: Yury Norov (NVIDIA) <yury.norov@...il.com>
> ---
> kernel/sched/fair.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7229339cbb1b..4ec012912cd1 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10821,10 +10821,9 @@ sched_balance_find_dst_group(struct sched_domain *sd, struct task_struct *p, int
> * take care of it.
> */
> if (p->nr_cpus_allowed != NR_CPUS) {
> - struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> -
> - cpumask_and(cpus, sched_group_span(local), p->cpus_ptr);
> - imb_numa_nr = min(cpumask_weight(cpus), sd->imb_numa_nr);
> + unsigned w = cpumask_weight_and(p->cpus_ptr,
> + sched_group_span(local));
> + imb_numa_nr = min(w, sd->imb_numa_nr);
> }
>
> imbalance = abs(local_sgs.idle_cpus - idlest_sgs.idle_cpus);
--
Thanks and Regards,
Prateek