Message-ID: <b638eb73e6da733afd6eb758cc144bf119e1b600.camel@gmx.de>
Date: Fri, 13 Jun 2025 12:20:56 +0200
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, clm@...a.com,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 2/5] sched: Optimize ttwu() / select_task_rq()
On Fri, 2025-06-13 at 11:40 +0200, Peter Zijlstra wrote:
> On Mon, Jun 09, 2025 at 07:01:47AM +0200, Mike Galbraith wrote:
>
> Right; so the problem being that we can race with
> migrate_disable_switch().
Yeah. Most of the time the fallback we do saves us, but we can and do
zip past it, and that turns the box various shades of sad.
>
> Does something like this help?
It surely will, but I'll test-drive it anyway. No news is good news.
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3593,7 +3593,7 @@ int select_task_rq(struct task_struct *p
>  		cpu = p->sched_class->select_task_rq(p, cpu, *wake_flags);
>  		*wake_flags |= WF_RQ_SELECTED;
>  	} else {
> -		cpu = cpumask_any(p->cpus_ptr);
> +		cpu = task_cpu(p);
>  	}
>
>  	/*