Message-ID: <20251119124449.1149616-7-sshegde@linux.ibm.com>
Date: Wed, 19 Nov 2025 18:14:38 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Cc: sshegde@...ux.ibm.com, mingo@...hat.com, peterz@...radead.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org, tglx@...utronix.de,
yury.norov@...il.com, maddy@...ux.ibm.com, srikar@...ux.ibm.com,
gregkh@...uxfoundation.org, pbonzini@...hat.com, seanjc@...gle.com,
kprateek.nayak@....com, vschneid@...hat.com, iii@...ux.ibm.com,
huschle@...ux.ibm.com, rostedt@...dmis.org, dietmar.eggemann@....com,
christophe.leroy@...roup.eu
Subject: [PATCH 06/17] sched/fair: Pass current CPU to select_idle_sibling()
Pattern in select_task_rq_fair():

	cpu = smp_processor_id();
	new_cpu = prev_cpu;
	/* wake_affine() may change new_cpu; otherwise it stays prev_cpu */
	new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
Due to this, new_cpu often ends up equal to prev_cpu. If prev_cpu was
marked as paravirt while the task was sleeping, it is beneficial to pick
the current CPU instead. If the current CPU is paravirt too, the wakeup
still happens there and the task is moved out at the next tick.

So pass the current CPU to select_idle_sibling() as well.
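
To illustrate, a minimal sketch of how the extra argument could be used
inside select_idle_sibling(). This is not the actual change in this
series; cpu_paravirt() is a hypothetical predicate standing in for
whatever paravirt check the series introduces:

	/*
	 * Hedged sketch: if prev went paravirt while the task slept,
	 * prefer the waking CPU (assuming it is not paravirt itself).
	 */
	if (cpu_paravirt(prev) && !cpu_paravirt(this_cpu))
		target = this_cpu;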
Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
---
kernel/sched/fair.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1855975b8248..015e00b370c9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1048,7 +1048,7 @@ static bool update_deadline(struct cfs_rq *cfs_rq, struct sched_entity *se)
#include "pelt.h"
-static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
+static int select_idle_sibling(struct task_struct *p, int this_cpu, int prev, int target);
static unsigned long task_h_load(struct task_struct *p);
static unsigned long capacity_of(int cpu);
@@ -7770,7 +7770,7 @@ static inline bool asym_fits_cpu(unsigned long util,
/*
* Try and locate an idle core/thread in the LLC cache domain.
*/
-static int select_idle_sibling(struct task_struct *p, int prev, int target)
+static int select_idle_sibling(struct task_struct *p, int this_cpu, int prev, int target)
{
bool has_idle_core = false;
struct sched_domain *sd;
@@ -8578,7 +8578,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
new_cpu = sched_balance_find_dst_cpu(sd, p, cpu, prev_cpu, sd_flag);
} else if (wake_flags & WF_TTWU) { /* XXX always ? */
/* Fast path */
- new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
+ new_cpu = select_idle_sibling(p, cpu, prev_cpu, new_cpu);
}
rcu_read_unlock();
--
2.47.3