Date:   Fri, 10 Nov 2017 16:37:13 -0500
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     Boqun Feng <boqun.feng@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
        Andy Lutomirski <luto@...nel.org>,
        Andrew Hunter <ahh@...gle.com>,
        Maged Michael <maged.michael@...il.com>,
        Avi Kivity <avi@...lladb.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Dave Watson <davejwatson@...com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>,
        Andrea Parri <parri.andrea@...il.com>,
        Russell King <linux@...linux.org.uk>,
        Greg Hackmann <ghackmann@...gle.com>,
        Will Deacon <will.deacon@....com>,
        David Sehr <sehr@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        linux-arch@...r.kernel.org
Subject: [RFC PATCH for 4.15 06/10] Fix: x86: Add missing core serializing instruction on migration

x86 lacks a core serializing instruction in migration scenarios.

Given that x86-32 can return to user-space with sysexit, and x86-64
through sysretq and sysretl, which are not core serializing, the
following user-space self-modifying code (JIT) scenario can occur:

     CPU 0                      CPU 1

User-space self-modify code
Preempted
migrated              ->
                                scheduler selects task
                                Return to user-space (iret or sysexit)
                                User-space issues sync_core()
                      <-        migrated
scheduler selects task
Return to user-space (sysexit)
jump to modified code
Run modified code without sync_core() -> bug.

This migration pattern can return to user-space through sysexit,
sysretl, or sysretq, which are not core serializing, and therefore
breaks the sequential consistency expectations of a single-threaded
process.
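
For illustration, here is a minimal single-threaded user-space sketch
of the pattern above (not taken from any real JIT; the buffer, the
emitted instruction, and the cpuid-based sync_core() are purely
illustrative). If the thread is migrated between its own sync_core()
and the jump into the freshly written code, the destination CPU has
not executed any serializing instruction, which is exactly the window
this patch closes:

	#include <stdint.h>
	#include <sys/mman.h>

	/* Core serializing instruction: cpuid serializes the executing CPU. */
	static void sync_core(void)
	{
		unsigned int a = 0, b, c, d;

		asm volatile("cpuid" : "+a"(a), "=b"(b), "=c"(c), "=d"(d)
			     :: "memory");
	}

	int main(void)
	{
		void (*fn)(void);
		uint8_t *buf = mmap(NULL, 4096,
				    PROT_READ | PROT_WRITE | PROT_EXEC,
				    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;

		buf[0] = 0xc3;	/* self-modify code: emit a single "ret" */
		sync_core();	/* serializes the *current* CPU only */

		/*
		 * If the scheduler migrates the thread here, the new CPU may
		 * execute the call below without ever having run a
		 * serializing instruction, unless the kernel issues one on
		 * the migration path.
		 */
		fn = (void (*)(void))buf;
		fn();
		return 0;
	}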

Fix this issue by invoking sync_core_before_usermode() the first
time a runqueue finishes a task switch after receiving a migrated
thread.
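
For context, a rough sketch of what the helper invoked here is
expected to do on x86 (the helper is introduced earlier in this
series; this is an assumption about its shape, not the actual
implementation):

	static inline void sync_core_before_usermode(void)
	{
		/*
		 * Ensure a core serializing instruction is executed before
		 * the next return to user-space.  A refined implementation
		 * could skip this when the exit goes through iret, which is
		 * already core serializing.
		 */
		sync_core();	/* cpuid on x86 */
	}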

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Andy Lutomirski <luto@...nel.org>
CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@...il.com>
CC: Andrew Hunter <ahh@...gle.com>
CC: Maged Michael <maged.michael@...il.com>
CC: Avi Kivity <avi@...lladb.com>
CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: Paul Mackerras <paulus@...ba.org>
CC: Michael Ellerman <mpe@...erman.id.au>
CC: Dave Watson <davejwatson@...com>
CC: Thomas Gleixner <tglx@...utronix.de>
CC: Ingo Molnar <mingo@...hat.com>
CC: "H. Peter Anvin" <hpa@...or.com>
CC: Andrea Parri <parri.andrea@...il.com>
CC: Russell King <linux@...linux.org.uk>
CC: Greg Hackmann <ghackmann@...gle.com>
CC: Will Deacon <will.deacon@....com>
CC: David Sehr <sehr@...gle.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>
CC: x86@...nel.org
CC: linux-arch@...r.kernel.org
---
 kernel/sched/core.c  | 7 +++++++
 kernel/sched/sched.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c79e94278613..4a1c9782267a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -927,6 +927,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
 
 	rq_lock(rq, rf);
 	BUG_ON(task_cpu(p) != new_cpu);
+	rq->need_sync_core = 1;
 	enqueue_task(rq, p, 0);
 	p->on_rq = TASK_ON_RQ_QUEUED;
 	check_preempt_curr(rq, p, 0);
@@ -2684,6 +2685,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	prev_state = prev->state;
 	vtime_task_switch(prev);
 	perf_event_task_sched_in(prev, current);
+#ifdef CONFIG_SMP
+	if (unlikely(rq->need_sync_core)) {
+		sync_core_before_usermode();
+		rq->need_sync_core = 0;
+	}
+#endif
 	finish_lock_switch(rq, prev);
 	finish_arch_post_lock_switch();
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cab256c1720a..33e617bc491c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -734,6 +734,7 @@ struct rq {
 	/* For active balancing */
 	int active_balance;
 	int push_cpu;
+	int need_sync_core;
 	struct cpu_stop_work active_balance_work;
 	/* cpu of this runqueue: */
 	int cpu;
-- 
2.11.0
