Message-ID: <YK9idgafP4G/ECme@hirez.programming.kicks-ass.net>
Date: Thu, 27 May 2021 11:12:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, Will Deacon <will@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Quentin Perret <qperret@...gle.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
kernel-team@...roid.com
Subject: Re: [PATCH 2/2] sched: Plug race between SCA, hotplug and
migration_cpu_stop()
On Wed, May 26, 2021 at 09:57:51PM +0100, Valentin Schneider wrote:
> - rq = __migrate_task(rq, &rf, p, arg->dest_cpu);
Suggests we ought to, at the very least, include something like the below.
/me continues reading...
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2246,28 +2246,6 @@ struct set_affinity_pending {
 	struct migration_arg	arg;
 };
 
-/*
- * Move (not current) task off this CPU, onto the destination CPU. We're doing
- * this because either it can't run here any more (set_cpus_allowed()
- * away from this CPU, or CPU going down), or because we're
- * attempting to rebalance this task on exec (sched_exec).
- *
- * So we race with normal scheduler movements, but that's OK, as long
- * as the task is no longer on this CPU.
- */
-static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
-				 struct task_struct *p, int dest_cpu)
-{
-	/* Affinity changed (again). */
-	if (!is_cpu_allowed(p, dest_cpu))
-		return rq;
-
-	update_rq_clock(rq);
-	rq = move_queued_task(rq, rf, p, dest_cpu);
-
-	return rq;
-}
-
 static int select_fallback_rq(int cpu, struct task_struct *p);
 
 /*
@@ -2292,7 +2270,7 @@ static int migration_cpu_stop(void *data)
 	local_irq_save(rf.flags);
 	/*
 	 * We need to explicitly wake pending tasks before running
-	 * __migrate_task() such that we will not miss enforcing cpus_ptr
+	 * move_queued_task() such that we will not miss enforcing cpus_ptr
 	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
 	 */
 	flush_smp_call_function_from_idle();
@@ -8463,7 +8441,7 @@ static int __balance_push_cpu_stop(void *arg)
 
 	if (task_rq(p) == rq && task_on_rq_queued(p)) {
 		cpu = select_fallback_rq(rq->cpu, p);
-		rq = __migrate_task(rq, &rf, p, cpu);
+		rq = move_queued_task(rq, &rf, p, cpu);
 	}
 
 	rq_unlock(rq, &rf);
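
For reference, move_queued_task() itself looks something like the below
at this point -- a from-memory sketch of that era's kernel/sched/core.c,
not something carried in the patch, so treat the details as approximate.
The point is that, unlike the __migrate_task() wrapper being deleted
above, it does no is_cpu_allowed() check of its own and trusts the
caller:

/* Sketch only; caller holds rq->lock and has updated the rq clock. */
static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
				   struct task_struct *p, int new_cpu)
{
	lockdep_assert_held(&rq->lock);

	/* Dequeue from the old runqueue and re-home the task. */
	deactivate_task(rq, p, DEQUEUE_NOCLOCK);
	set_task_cpu(p, new_cpu);
	rq_unlock(rq, rf);

	/* Lock the destination runqueue and queue the task there. */
	rq = cpu_rq(new_cpu);
	rq_lock(rq, rf);
	BUG_ON(task_cpu(p) != new_cpu);
	activate_task(rq, p, 0);
	check_preempt_curr(rq, p, 0);

	return rq;
}

That should be fine for __balance_push_cpu_stop(): AFAICT
select_fallback_rq() only ever returns a CPU the task is allowed to run
on (it checks is_cpu_allowed() itself), so the wrapper's check was
redundant there anyway.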