Message-ID: <20250815011512.6870-1-lirongqing@baidu.com>
Date: Fri, 15 Aug 2025 09:15:12 +0800
From: lirongqing <lirongqing@...du.com>
To: <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
CC: Li RongQing <lirongqing@...du.com>
Subject: [PATCH] sched/fair: Optimize CPU iteration using for_each_cpu_and[not]
From: Li RongQing <lirongqing@...du.com>
Replace open-coded CPU iteration with for_each_cpu_and() and
for_each_cpu_andnot() in three places. The open-coded loops walk the
whole mask and test each CPU individually (via cpumask_test_cpu() or a
cpu == sibling comparison) before continuing; the specialized macros
fold that test into the mask iteration itself, which simplifies the
code and avoids the per-CPU check.
Signed-off-by: Li RongQing <lirongqing@...du.com>
---
kernel/sched/fair.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b173a05..8794581 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1389,10 +1389,7 @@ static inline bool is_core_idle(int cpu)
#ifdef CONFIG_SCHED_SMT
int sibling;
- for_each_cpu(sibling, cpu_smt_mask(cpu)) {
- if (cpu == sibling)
- continue;
-
+ for_each_cpu_andnot(sibling, cpu_smt_mask(cpu), cpumask_of(cpu)) {
if (!idle_cpu(sibling))
return false;
}
@@ -2474,11 +2471,7 @@ static void task_numa_find_cpu(struct task_numa_env *env,
maymove = !load_too_imbalanced(src_load, dst_load, env);
}
- for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
- /* Skip this CPU if the source task cannot migrate */
- if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
- continue;
-
+ for_each_cpu_and(cpu, cpumask_of_node(env->dst_nid), env->p->cpus_ptr) {
env->dst_cpu = cpu;
if (task_numa_compare(env, taskimp, groupimp, maymove))
break;
@@ -7493,10 +7486,7 @@ void __update_idle_core(struct rq *rq)
if (test_idle_cores(core))
goto unlock;
- for_each_cpu(cpu, cpu_smt_mask(core)) {
- if (cpu == core)
- continue;
-
+ for_each_cpu_andnot(cpu, cpu_smt_mask(core), cpumask_of(core)) {
if (!available_idle_cpu(cpu))
goto unlock;
}
--
2.9.4