Message-ID: <CADHxFxTLacN3o45WbDCLpMVb6oz2O4EeetXZkXgYDOcSJVqP-g@mail.gmail.com>
Date: Wed, 9 Apr 2025 15:11:47 +0800
From: hupu <hupu.gm@...il.com>
To: jstultz@...gle.com, linux-kernel@...r.kernel.org
Cc: juri.lelli@...hat.com, peterz@...radead.org, vschneid@...hat.com, 
	mingo@...hat.com, vincent.guittot@...aro.org, dietmar.eggemann@....com, 
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de, hupu@...nssion.com
Subject: Re: [RFC 1/1] sched: Skip redundant operations for proxy tasks
 needing return migration

Dear maintainer,
My sincere apologies for the oversight in my previous submission: I
failed to include the repository and branch details needed to apply the
patch. Here are the specifics:

The patch is based on the following repository:
https://github.com/johnstultz-work/linux-dev.git

Specifically, the changes target these ongoing Proxy Execution
development branches:
a) proxy-exec-WIP
b) proxy-exec-v15-WIP
Please let me know if you need further clarifications. Thank you for
your understanding and patience.

On Mon, Apr 7, 2025 at 9:47 PM hupu <hupu.gm@...il.com> wrote:
>
> Move the proxy_needs_return() check earlier in ttwu_runnable() to avoid
> unnecessary scheduling operations when a proxy task requires return
> migration to its original CPU.
>
> The current implementation performs several operations (rq clock update,
> enqueue, and wakeup preemption checks) before checking for return
> migration needs. This is inefficient because:
>
> 1. For tasks needing return migration, these operations are redundant
>    since the task will be dequeued from the current rq anyway
> 2. The task may not even be allowed to run on the current CPU due to
>    possible affinity changes while it was blocked
> 3. The proper CPU will be selected by select_task_rq() in the
>    subsequent try_to_wake_up() logic
>
> By moving the proxy_needs_return() check to the beginning, we:
> - Avoid unnecessary rq clock updates
> - Skip redundant enqueue operations
> - Eliminate meaningless wakeup preemption checks
> - Let the normal wakeup path handle proper CPU selection
>
> This optimization is particularly valuable in proxy execution scenarios
> where tasks frequently migrate between CPUs.
>
> Signed-off-by: hupu <hupu.gm@...il.com>
> ---
>  kernel/sched/core.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ca4ca739eb85..ebb4bc1800e3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4162,6 +4162,10 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>
>         rq = __task_rq_lock(p, &rf);
>         if (task_on_rq_queued(p)) {
> +               if (proxy_needs_return(rq, p)) {
> +                       _trace_sched_pe_return_migration(p);
> +                       goto out;
> +               }
>                 update_rq_clock(rq);
>                 if (p->se.sched_delayed) {
>                         proxy_remove_from_sleeping_owner(p);
> @@ -4174,10 +4178,6 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>                          */
>                         wakeup_preempt(rq, p, wake_flags);
>                 }
> -               if (proxy_needs_return(rq, p)) {
> -                       _trace_sched_pe_return_migration(p);
> -                       goto out;
> -               }
>                 ttwu_do_wakeup(p);
>                 ret = 1;
>         }
> --
> 2.47.0
>
