Message-Id: <20180412192800.15708-10-mathieu.desnoyers@efficios.com>
Date: Thu, 12 Apr 2018 15:27:46 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Peter Zijlstra <peterz@...radead.org>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Andy Lutomirski <luto@...capital.net>,
Dave Watson <davejwatson@...com>
Cc: linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
Paul Turner <pjt@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Russell King <linux@....linux.org.uk>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>, Andrew Hunter <ahh@...gle.com>,
Andi Kleen <andi@...stfloor.org>, Chris Lameter <cl@...ux.com>,
Ben Maurer <bmaurer@...com>,
Steven Rostedt <rostedt@...dmis.org>,
Josh Triplett <josh@...htriplett.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Michael Kerrisk <mtk.manpages@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: [RFC PATCH for 4.18 09/23] sched: Implement push_task_to_cpu (v2)

Implement push_task_to_cpu(), which moves the task received as argument
to the destination CPU's runqueue. It does so only if the destination
CPU is in the task's allowed CPU mask and if that CPU is active. If the
CPU is not part of the allowed mask, -EINVAL is returned. If the CPU is
not active, -EBUSY is returned.

It does not change the CPU allowed mask, and can therefore be used
within applications which rely on owning the sched_setaffinity() state.

It does not pin the task to the destination CPU, which means that the
scheduler may choose to move the task away from that CPU before the
task executes. Code invoking push_task_to_cpu() must be prepared to
retry in that case, as illustrated by the sketch below.
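
For illustration only, here is a minimal sketch of the retry pattern an
in-kernel caller might use. The helper push_current_to_cpu() below is
hypothetical and not part of this patch; it assumes the caller runs in
process context and pushes the current task:

static int push_current_to_cpu(unsigned int dest_cpu)
{
	int ret;

	do {
		ret = push_task_to_cpu(current, dest_cpu);
		if (ret)
			return ret;	/* -EINVAL or -EBUSY */
		/*
		 * The task is not pinned to dest_cpu: the scheduler
		 * may move it away again before it runs there, so
		 * re-check the current CPU and retry if needed.
		 */
	} while (raw_smp_processor_id() != dest_cpu);

	return 0;
}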
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Paul Turner <pjt@...gle.com>
CC: Thomas Gleixner <tglx@...utronix.de>
CC: Andrew Hunter <ahh@...gle.com>
CC: Andy Lutomirski <luto@...capital.net>
CC: Andi Kleen <andi@...stfloor.org>
CC: Dave Watson <davejwatson@...com>
CC: Chris Lameter <cl@...ux.com>
CC: Ingo Molnar <mingo@...hat.com>
CC: "H. Peter Anvin" <hpa@...or.com>
CC: Ben Maurer <bmaurer@...com>
CC: Steven Rostedt <rostedt@...dmis.org>
CC: Josh Triplett <josh@...htriplett.org>
CC: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Andrew Morton <akpm@...ux-foundation.org>
CC: Russell King <linux@....linux.org.uk>
CC: Catalin Marinas <catalin.marinas@....com>
CC: Will Deacon <will.deacon@....com>
CC: Michael Kerrisk <mtk.manpages@...il.com>
CC: Boqun Feng <boqun.feng@...il.com>
CC: linux-api@...r.kernel.org
---
Changes since v1:
- Return -EBUSY if CPU is not active.
---
 kernel/sched/core.c  | 42 ++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  9 +++++++++
 2 files changed, 51 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8e8bd91b9bd7..d65f227b0f18 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1062,6 +1062,48 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 		set_curr_task(rq, p);
 }
 
+int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+	int ret = 0;
+
+	rq = task_rq_lock(p, &rf);
+	update_rq_clock(rq);
+
+	if (!cpumask_test_cpu(dest_cpu, &p->cpus_allowed)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!cpumask_test_cpu(dest_cpu, cpu_active_mask)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	if (task_cpu(p) == dest_cpu)
+		goto out;
+
+	if (task_running(rq, p) || p->state == TASK_WAKING) {
+		struct migration_arg arg = { p, dest_cpu };
+		/* Need help from migration thread: drop lock and wait. */
+		task_rq_unlock(rq, p, &rf);
+		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
+		tlb_migrate_finish(p->mm);
+		return 0;
+	} else if (task_on_rq_queued(p)) {
+		/*
+		 * OK, since we're going to drop the lock immediately
+		 * afterwards anyway.
+		 */
+		rq = move_queued_task(rq, &rf, p, dest_cpu);
+	}
+out:
+	task_rq_unlock(rq, p, &rf);
+
+	return ret;
+}
+
 /*
  * Change a given task's CPU affinity. Migrate the thread to a
  * proper CPU and schedule it away if the CPU it's executing on
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fb5fc458547f..43fdfcb0738a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1251,6 +1251,15 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
 #endif
 }
 
+#ifdef CONFIG_SMP
+int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu);
+#else
+static inline int push_task_to_cpu(struct task_struct *p, unsigned int dest_cpu)
+{
+	return 0;
+}
+#endif
+
 /*
  * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
  */
--
2.11.0