Message-ID: <20210709220017.813653831@goodmis.org>
Date: Fri, 09 Jul 2021 17:59:57 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org,
linux-rt-users <linux-rt-users@...r.kernel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Carsten Emde <C.Emde@...dl.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
John Kacur <jkacur@...hat.com>, Daniel Wagner <wagi@...om.org>,
Tom Zanussi <zanussi@...nel.org>,
"Srivatsa S. Bhat" <srivatsa@...il.mit.edu>, stable@...nel.org,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Valentin Schneider <valentin.schneider@....com>,
Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: [PATCH RT 4/8] sched: Optimize migration_cpu_stop()
5.10.47-rt46-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz@...radead.org>
commit 3f1bc119cd7fc987c8ed25ffb717f99403bb308c upstream.
When the purpose of migration_cpu_stop() is to migrate the task to
'any' valid CPU, don't migrate the task when it's already running on a
valid CPU.
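
Purely as an illustration for review (not kernel code), the new check can
be modelled in a few lines of user-space C. cpu_allowed(), resolve_dest()
and the plain bitmask below are made-up stand-ins for cpumask_test_cpu(),
migration_cpu_stop()'s dest_cpu handling and p->cpus_mask, and the linear
scan stands in for cpumask_any_distribute():

/* Build: gcc -o model model.c && ./model */
#include <stdio.h>

/* Stand-in for cpumask_test_cpu(task_cpu(p), &p->cpus_mask):
 * bit N of 'allowed' is set when CPU N is a valid destination. */
static int cpu_allowed(unsigned long allowed, int cpu)
{
	return (allowed >> cpu) & 1UL;
}

/*
 * Model of the decision this patch adds: when the caller asked for
 * "any" valid CPU (dest_cpu < 0) and the task already sits on an
 * allowed CPU, skip the migration entirely (the current CPU is
 * returned unchanged).  Otherwise pick the lowest allowed CPU,
 * standing in for cpumask_any_distribute().
 */
static int resolve_dest(unsigned long allowed, int cur_cpu, int dest_cpu)
{
	if (dest_cpu < 0) {
		if (cpu_allowed(allowed, cur_cpu))
			return cur_cpu;		/* no stopper work needed */

		for (dest_cpu = 0; dest_cpu < 64; dest_cpu++)
			if (cpu_allowed(allowed, dest_cpu))
				break;
	}
	return dest_cpu;
}

int main(void)
{
	/* allowed = CPUs 2 and 3 (mask 0xc) */
	printf("%d\n", resolve_dest(0xcUL, 3, -1));	/* 3: already valid */
	printf("%d\n", resolve_dest(0xcUL, 0, -1));	/* 2: must move */
	printf("%d\n", resolve_dest(0xcUL, 0, 3));	/* 3: explicit target */
	return 0;
}

The third call shows that an explicit dest_cpu is left untouched; the
early exit only applies to the dest_cpu < 0 ("any CPU") case.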
Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
Cc: stable@...nel.org
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
Link: https://lkml.kernel.org/r/20210224131355.569238629@infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@...driver.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
kernel/sched/core.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6880c300c624..9cbe12d8c5bd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1972,14 +1972,25 @@ static int migration_cpu_stop(void *data)
 			complete = true;
 		}
 
-		if (dest_cpu < 0)
+		if (dest_cpu < 0) {
+			if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
+				goto out;
+
 			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
+		}
 
 		if (task_on_rq_queued(p))
 			rq = __migrate_task(rq, &rf, p, dest_cpu);
 		else
 			p->wake_cpu = dest_cpu;
 
+		/*
+		 * XXX __migrate_task() can fail, at which point we might end
+		 * up running on a dodgy CPU, AFAICT this can only happen
+		 * during CPU hotplug, at which point we'll get pushed out
+		 * anyway, so it's probably not a big deal.
+		 */
+
 	} else if (pending) {
 		/*
 		 * This happens when we get migrated between migrate_enable()'s
--
2.30.2