Message-ID: <162971078270.25758.8552853367056272946.tip-bot2@tip-bot2>
Date: Mon, 23 Aug 2021 09:26:22 -0000
From: "tip-bot2 for Will Deacon" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Will Deacon <will@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Valentin Schneider <Valentin.Schneider@....com>,
Quentin Perret <qperret@...gle.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched: Reject CPU affinity changes based on
task_cpu_possible_mask()
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 234a503e670be01f72841be9fcf68dfb89a1fa8b
Gitweb: https://git.kernel.org/tip/234a503e670be01f72841be9fcf68dfb89a1fa8b
Author: Will Deacon <will@...nel.org>
AuthorDate: Fri, 30 Jul 2021 12:24:32 +01:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 20 Aug 2021 12:32:59 +02:00
sched: Reject CPU affinity changes based on task_cpu_possible_mask()
Reject explicit requests to change the affinity mask of a task via
set_cpus_allowed_ptr() if the requested mask is not a subset of the
mask returned by task_cpu_possible_mask(). This ensures that the
'cpus_mask' for a given task cannot contain CPUs which are incapable of
executing it, except in cases where the affinity is forced.
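For illustration only, here is a minimal, self-contained sketch of the rule this patch adds (not kernel code: the toy cpumask representation and the set_affinity() helper are invented for this example). It mirrors the check introduced in the hunk below: a non-kthread affinity request that is not a subset of the task's possible mask is rejected with -EINVAL, while kernel threads keep their existing behaviour.

/*
 * Toy model of the new rejection rule; compile with any C compiler.
 * cpumask_t here is a plain bitmask, one bit per CPU.
 */
#include <stdbool.h>
#include <stdio.h>

#define EINVAL 22

typedef unsigned long cpumask_t;

static bool cpumask_subset(cpumask_t small, cpumask_t big)
{
	return (small & ~big) == 0;
}

/* Simplified stand-in for the check added to __set_cpus_allowed_ptr(). */
static int set_affinity(cpumask_t new_mask, cpumask_t possible_mask,
			bool kthread)
{
	if (!kthread && !cpumask_subset(new_mask, possible_mask))
		return -EINVAL;
	return 0;			/* request accepted */
}

int main(void)
{
	cpumask_t possible = 0x0f;	/* task can only run on CPUs 0-3 */

	printf("%d\n", set_affinity(0x03, possible, false));	/* 0: subset, allowed   */
	printf("%d\n", set_affinity(0x30, possible, false));	/* -22: CPUs 4-5 refused */
	printf("%d\n", set_affinity(0x30, possible, true));	/* 0: kthreads exempt    */
	return 0;
}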
Signed-off-by: Will Deacon <will@...nel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@....com>
Reviewed-by: Quentin Perret <qperret@...gle.com>
Link: https://lore.kernel.org/r/20210730112443.23245-6-will@kernel.org
---
kernel/sched/core.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b9d4bae..8cec0d2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2709,7 +2709,9 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 				  const struct cpumask *new_mask,
 				  u32 flags)
 {
+	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
+	bool kthread = p->flags & PF_KTHREAD;
 	unsigned int dest_cpu;
 	struct rq_flags rf;
 	struct rq *rq;
@@ -2718,7 +2720,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	rq = task_rq_lock(p, &rf);
 	update_rq_clock(rq);
 
-	if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
+	if (kthread || is_migration_disabled(p)) {
 		/*
 		 * Kernel threads are allowed on online && !active CPUs,
 		 * however, during cpu-hot-unplug, even these might get pushed
@@ -2732,6 +2734,11 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 		cpu_valid_mask = cpu_online_mask;
 	}
 
+	if (!kthread && !cpumask_subset(new_mask, cpu_allowed_mask)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	/*
 	 * Must re-check here, to close a race against __kthread_bind(),
 	 * sched_setaffinity() is not guaranteed to observe the flag.