Message-ID: <20170315231827.GA13656@htj.duckdns.org>
Date: Wed, 15 Mar 2017 19:18:27 -0400
From: Tejun Heo <tj@...nel.org>
To: Oleg Nesterov <oleg@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Chris Mason <clm@...com>,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: [PATCH 1/2] kthread: add barriers to set_kthread_struct() and
to_kthread()
Until now, all to_kthread() users have been interlocked with kthread
creation, so there has been no need for explicit barriers when setting
the kthread pointer or dereferencing it.
However, there is a race condition where userland can interfere with a
kthread while it's being initialized. Closing it requires to_kthread()
to be safe to call from an unsynchronized context.
This patch moves struct kthread initialization before
set_kthread_struct() and adds matching barriers in
set_kthread_struct() and to_kthread(), so that dereferencing
to_kthread() always returns initialized fields.
Signed-off-by: Tejun Heo <tj@...nel.org>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Chris Mason <clm@...com>
Cc: stable@...r.kernel.org # v4.3+ (we can't close the race < v4.3)
---
kernel/kthread.c | 30 +++++++++++++++++++++++-------
1 file changed, 23 insertions(+), 7 deletions(-)
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -57,6 +57,9 @@ enum KTHREAD_BITS {
static inline void set_kthread_struct(void *kthread)
{
+ /* paired with smp_read_barrier_depends() in to_kthread() */
+ smp_wmb();
+
/*
* We abuse ->set_child_tid to avoid the new member and because it
* can't be wrongly copied by copy_process(). We also rely on fact
@@ -67,8 +70,19 @@ static inline void set_kthread_struct(vo
static inline struct kthread *to_kthread(struct task_struct *k)
{
+ void *ptr;
+
WARN_ON(!(k->flags & PF_KTHREAD));
- return (__force void *)k->set_child_tid;
+
+ ptr = (__force void *)k->set_child_tid;
+
+ /*
+ * Paired with smp_wmb() in set_kthread_struct() and ensures that
+ * the caller sees initialized content of the returned kthread.
+ */
+ smp_read_barrier_depends();
+
+ return ptr;
}
void free_kthread_struct(struct task_struct *k)
@@ -196,6 +210,14 @@ static int kthread(void *_create)
int ret;
self = kmalloc(sizeof(*self), GFP_KERNEL);
+ if (self) {
+ self->flags = 0;
+ self->data = data;
+ init_completion(&self->exited);
+ init_completion(&self->parked);
+ current->vfork_done = &self->exited;
+ }
+
set_kthread_struct(self);
/* If user was SIGKILLed, I release the structure. */
@@ -211,12 +233,6 @@ static int kthread(void *_create)
do_exit(-ENOMEM);
}
- self->flags = 0;
- self->data = data;
- init_completion(&self->exited);
- init_completion(&self->parked);
- current->vfork_done = &self->exited;
-
/* OK, tell user we're spawned, wait for stop or wakeup */
__set_current_state(TASK_UNINTERRUPTIBLE);
create->result = current;