Message-ID: <CADHxFxSVdt_oG=J=aJDfkOcYEBScUxKV=NZNUvgtkAj6sbWvGA@mail.gmail.com>
Date: Wed, 4 Jun 2025 16:13:34 +0800
From: hupu <hupu.gm@...il.com>
To: John Stultz <jstultz@...gle.com>
Cc: peterz@...radead.org, linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
vschneid@...hat.com, mingo@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, hupu@...nssion.com
Subject: Re: [RFC 1/1] sched: Skip redundant operations for proxy tasks
needing return migration
Hi John,

Thank you for your response.
>
> This looks identical to the version above, or am I missing something?
>
Sorry for the confusion caused by my unclear explanation. The complete
v2 patch is below. It moves the `proxy_needs_return()` check so that it
runs after the `sched_delayed` handling but before the
`wakeup_preempt()` check. This lets us skip the `wakeup_preempt()` call
entirely when a donor task has to migrate back to its original CPU,
since checking for preemption on the current runqueue is pointless in
that case.

Subject: [RFC] sched: Skip redundant operations when donor needs return.
Move the proxy_needs_return() check earlier in ttwu_runnable()
to minimize unnecessary operations, particularly in cases
where a donor task needs to migrate back to its original CPU.
Signed-off-by: hupu <hupu.gm@...il.com>
---
 kernel/sched/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 06e9924d3f77..2c863ad53173 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4164,6 +4164,10 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 			proxy_remove_from_sleeping_owner(p);
 			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
 		}
+		if (proxy_needs_return(rq, p)) {
+			_trace_sched_pe_return_migration(p);
+			goto out;
+		}
 		if (!task_on_cpu(rq, p)) {
 			/*
 			 * When on_rq && !on_cpu the task is preempted, see if
@@ -4171,10 +4175,6 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 			 */
 			wakeup_preempt(rq, p, wake_flags);
 		}
-		if (proxy_needs_return(rq, p)) {
-			_trace_sched_pe_return_migration(p);
-			goto out;
-		}
 		ttwu_do_wakeup(p);
 		ret = 1;
 	}
--
2.17.1
> Hrm. Can you walk me through the specific case you're thinking about here?
>
> Is the idea something like: a mutex blocked task (not sched_delayed)
> gets migrated to a rq, where it acts as a donor so that a lock holder
> can be run.
> If the lock holder sleeps, it might be set as sched_delayed, but the
> donor will be dequeued from the rq and enqueued onto the sched_delayed
> sleeping owner.
>
> And the concern is that in doing this, the donor's lag from the rq it
> was migrated to won't be preserved (since it isn't set as
> sched_delayed)?
>
> I'll need to think on this a bit, as I don't quite have my head around
> how mutex blocked tasks might also end up sched_delayed.
>
I need to add some debugging logs to investigate this further. It may
take a little time, but I will get back to you as soon as I have
results.

Thanks.
hupu