Message-ID: <20210112144843.788106541@infradead.org>
Date: Tue, 12 Jan 2021 15:43:46 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org, tglx@...utronix.de
Cc: linux-kernel@...r.kernel.org, jiangshanlai@...il.com,
valentin.schneider@....com, cai@...hat.com,
vincent.donnefort@....com, decui@...rosoft.com, paulmck@...nel.org,
vincent.guittot@...aro.org, rostedt@...dmis.org, axboe@...nel.dk,
tj@...nel.org, peterz@...radead.org
Subject: [PATCH 2/4] kthread: Extract KTHREAD_IS_PER_CPU
There is a need to distinguish genuine per-cpu kthreads from kthreads
that happen to have a single CPU affinity.
Genuine per-cpu kthreads are kthreads that are CPU-affine for
correctness; these will obviously have PF_KTHREAD set, but must also
have PF_NO_SETAFFINITY set, lest userspace modify their affinity and
ruin things.
However, these two flags are not sufficient: PF_NO_SETAFFINITY is
also set on other tasks that have their affinity controlled through
other means, such as workqueue workers.
Therefore another bit is needed; it turns out kthread_create_on_cpu()
already sets such a bit: KTHREAD_IS_PER_CPU, which is used to make
kthread_park()/kthread_unpark() work correctly.
Expose this flag and remove the implicit setting of it from
kthread_create_on_cpu(); the io_uring usage of it seems dubious at
best.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Tested-by: Paul E. McKenney <paulmck@...nel.org>
---
 include/linux/kthread.h |  3 +++
 kernel/kthread.c        | 25 ++++++++++++++++++++++++-
 kernel/sched/core.c     |  2 +-
 kernel/sched/sched.h    |  4 ++--
 kernel/smpboot.c        |  1 +
 kernel/workqueue.c      | 11 +++++++++--
 6 files changed, 40 insertions(+), 6 deletions(-)
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -33,6 +33,9 @@ struct task_struct *kthread_create_on_cp
 				   unsigned int cpu,
 				   const char *namefmt);
 
+void kthread_set_per_cpu(struct task_struct *k, bool set);
+bool kthread_is_per_cpu(struct task_struct *k);
+
/**
* kthread_run - create and wake a thread.
* @threadfn: the function to run until signal_pending(current).
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -493,11 +493,34 @@ struct task_struct *kthread_create_on_cp
 		return p;
 	kthread_bind(p, cpu);
 	/* CPU hotplug need to bind once again when unparking the thread. */
-	set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags);
 	to_kthread(p)->cpu = cpu;
 	return p;
 }
+void kthread_set_per_cpu(struct task_struct *k, bool set)
+{
+	struct kthread *kthread = to_kthread(k);
+
+	if (!kthread)
+		return;
+
+	if (set) {
+		WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY));
+		WARN_ON_ONCE(k->nr_cpus_allowed != 1);
+		set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+	} else {
+		clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+	}
+}
+
+bool kthread_is_per_cpu(struct task_struct *k)
+{
+	struct kthread *kthread = to_kthread(k);
+
+	if (!kthread)
+		return false;
+
+	return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+}
+
+
/**
* kthread_unpark - unpark a thread created by kthread_create().
* @k: thread created by kthread_create().
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -188,6 +188,7 @@ __smpboot_create_thread(struct smp_hotpl
 		kfree(td);
 		return PTR_ERR(tsk);
 	}
+	kthread_set_per_cpu(tsk, true);
 	/*
 	 * Park the thread so that it could start right on the CPU
 	 * when it is available.