Message-ID: <20250407134712.93062-1-hupu.gm@gmail.com>
Date: Mon, 7 Apr 2025 21:47:12 +0800
From: hupu <hupu.gm@...il.com>
To: jstultz@...gle.com,
linux-kernel@...r.kernel.org
Cc: juri.lelli@...hat.com,
peterz@...radead.org,
vschneid@...hat.com,
mingo@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
hupu@...nssion.com,
hupu <hupu.gm@...il.com>
Subject: [RFC 1/1] sched: Skip redundant operations for proxy tasks needing return migration

Move the proxy_needs_return() check earlier in ttwu_runnable() to avoid
unnecessary scheduling operations when a proxy task requires return
migration to its original CPU.

The current implementation performs several operations (rq clock update,
enqueue, and wakeup preemption check) before checking whether return
migration is needed. This is inefficient because:

1. For a task needing return migration, these operations are redundant,
   since the task will be dequeued from the current rq anyway.
2. The task may not even be allowed to run on the current CPU, as its
   affinity may have changed while it was blocked.
3. The proper CPU selection will be handled by select_task_rq() in the
   subsequent try_to_wake_up() path.

By moving the proxy_needs_return() check to the beginning, we:

- Avoid an unnecessary rq clock update
- Skip a redundant enqueue operation
- Eliminate a pointless wakeup preemption check
- Let the normal wakeup path handle proper CPU selection

This optimization is particularly valuable in proxy execution scenarios
where tasks frequently migrate between CPUs.
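
For reference, here is a simplified sketch of ttwu_runnable() with this
change applied. It is reconstructed from the hunks below plus the current
mainline body of the function; the exact shape of the proxy execution tree
(e.g. the placement of the out: label and the delayed-dequeue enqueue call)
may differ slightly:

static int ttwu_runnable(struct task_struct *p, int wake_flags)
{
	struct rq_flags rf;
	struct rq *rq;
	int ret = 0;

	rq = __task_rq_lock(p, &rf);
	if (task_on_rq_queued(p)) {
		/*
		 * Check for return migration before doing anything else:
		 * the task will be dequeued from this rq anyway, so the
		 * clock update, enqueue and wakeup preemption below would
		 * be wasted work.
		 */
		if (proxy_needs_return(rq, p)) {
			_trace_sched_pe_return_migration(p);
			goto out;
		}
		update_rq_clock(rq);
		if (p->se.sched_delayed) {
			proxy_remove_from_sleeping_owner(p);
			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
		}
		if (!task_on_cpu(rq, p)) {
			/*
			 * When on_rq && !on_cpu the task is preempted, see if
			 * it should preempt the task that is current now.
			 */
			wakeup_preempt(rq, p, wake_flags);
		}
		ttwu_do_wakeup(p);
		ret = 1;
	}
out:
	__task_rq_unlock(rq, &rf);

	return ret;
}

Returning with ret == 0 lets the rest of try_to_wake_up() run, so the
woken task goes through select_task_rq() and is placed on an allowed CPU.
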
Signed-off-by: hupu <hupu.gm@...il.com>
---
 kernel/sched/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca4ca739eb85..ebb4bc1800e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4162,6 +4162,10 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 
 	rq = __task_rq_lock(p, &rf);
 	if (task_on_rq_queued(p)) {
+		if (proxy_needs_return(rq, p)) {
+			_trace_sched_pe_return_migration(p);
+			goto out;
+		}
 		update_rq_clock(rq);
 		if (p->se.sched_delayed) {
 			proxy_remove_from_sleeping_owner(p);
@@ -4174,10 +4178,6 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 			 */
 			wakeup_preempt(rq, p, wake_flags);
 		}
-		if (proxy_needs_return(rq, p)) {
-			_trace_sched_pe_return_migration(p);
-			goto out;
-		}
 		ttwu_do_wakeup(p);
 		ret = 1;
 	}
--
2.47.0