Message-Id: <20170817194627.12539-1-mathieu.desnoyers@efficios.com>
Date:   Thu, 17 Aug 2017 15:46:27 -0400
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrew Hunter <ahh@...gle.com>,
        Maged Michael <maged.michael@...il.com>, gromer@...gle.com,
        Avi Kivity <avi@...lladb.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Dave Watson <davejwatson@...com>
Subject: [PATCH] membarrier: Document scheduler barrier requirements

Document the membarrier requirement that __schedule() issue a full
memory barrier after coming from user-space and before storing to
rq->curr. This barrier is provided by smp_mb__before_spinlock() in
__schedule().
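
To make the documented guarantee concrete, here is a minimal,
hypothetical user-space sketch (illustration only, not part of the
patch; the membarrier() wrapper is our own, and it assumes a kernel
and headers with the expedited private command from the series this
applies on top of, plus the registration command later kernels added):

#define _GNU_SOURCE
#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* glibc has no membarrier(2) wrapper, so invoke the syscall directly. */
static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* MEMBARRIER_CMD_QUERY returns a bitmask of supported commands. */
	int mask = membarrier(MEMBARRIER_CMD_QUERY, 0);

	if (mask < 0 || !(mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED)) {
		fprintf(stderr, "private expedited membarrier unsupported\n");
		return 1;
	}
	/*
	 * Kernels released after this series additionally require
	 * registering intent first.
	 */
	if (mask & MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED)
		membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
	/*
	 * On success, every thread of this process that was running in
	 * user-space has executed a full memory barrier before the call
	 * returns -- the guarantee the scheduler barrier above upholds.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0)) {
		perror("membarrier");
		return 1;
	}
	return 0;
}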

Document that membarrier requires a full memory barrier on the
transition from a kernel thread to a userspace thread. This is
currently ensured by the implicit full barrier of
atomic_dec_and_test() in mmdrop().
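
For context, the path in question looks roughly like this (condensed
from kernel/sched/core.c of this era; elisions marked, not a
standalone compilable unit). When prev is a kernel thread, the mm it
borrowed as active_mm is handed to finish_task_switch() through
rq->prev_mm, and the mmdrop() there supplies the barrier:

	/* In context_switch(): */
	if (!mm) {				/* next is a kernel thread */
		next->active_mm = oldmm;
		mmgrab(oldmm);
		enter_lazy_tlb(oldmm, next);
	} else
		switch_mm_irqs_off(oldmm, mm, next);

	if (!prev->mm) {			/* prev is a kernel thread */
		prev->active_mm = NULL;
		rq->prev_mm = oldmm;		/* dropped after the switch */
	}

	/* In finish_task_switch(): */
	struct mm_struct *mm = rq->prev_mm;

	rq->prev_mm = NULL;
	/* ... */
	if (mm)
		mmdrop(mm);	/* the atomic_dec_and_test() in here is
				 * the full barrier membarrier needs */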

On x86, the full barrier in switch_mm_irqs_off() is currently provided
by several mm_cpumask update operations as well as by load_cr3().
However, since load_cr3() may become lazy in the future, move the
documentation from load_cr3() to cpumask_set_cpu().
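
For illustration only (not part of the patch): on x86,
cpumask_set_cpu() compiles down to a LOCK-prefixed bitop, and any
LOCK-prefixed instruction is a full memory barrier on x86. Below is a
minimal user-space analogue, assuming x86-64 and GCC/Clang inline asm;
set_bit_locked() is a hypothetical stand-in for the kernel's set_bit():

#include <stdio.h>

/* LOCK BTS: atomically set bit 'nr' and act as a full memory barrier. */
static inline void set_bit_locked(long nr, volatile unsigned long *addr)
{
	asm volatile("lock btsq %1, %0"
		     : "+m" (*addr)
		     : "Ir" (nr)
		     : "memory", "cc");
}

int main(void)
{
	unsigned long mask = 0;

	set_bit_locked(3, &mask);	/* orders like smp_mb() on x86 */
	printf("mask = %#lx\n", mask);	/* prints 0x8 */
	return 0;
}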

[ Applies on top of "membarrier: expedited private command (v4)". ]

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@...il.com>
CC: Andrew Hunter <ahh@...gle.com>
CC: Maged Michael <maged.michael@...il.com>
CC: gromer@...gle.com
CC: Avi Kivity <avi@...lladb.com>
CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: Paul Mackerras <paulus@...ba.org>
CC: Michael Ellerman <mpe@...erman.id.au>
CC: Dave Watson <davejwatson@...com>
---
 arch/x86/mm/tlb.c        | 7 +++++--
 include/linux/sched/mm.h | 4 ++++
 kernel/sched/core.c      | 9 +++++++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bb103d693f33..a79e72691cf1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -105,6 +105,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.loaded_mm, next);
 
 	WARN_ON_ONCE(cpumask_test_cpu(cpu, mm_cpumask(next)));
+	/*
+	 * The full memory barrier implied by mm_cpumask update
+	 * operations is required by the membarrier system call.
+	 */
 	cpumask_set_cpu(cpu, mm_cpumask(next));
 
 	/*
@@ -132,8 +136,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	 * due to instruction fetches or for no reason at all,
 	 * and neither LOCK nor MFENCE orders them.
 	 * Fortunately, load_cr3() is serializing and gives the
-	 * ordering guarantee we need. This full barrier is also
-	 * required by the membarrier system call.
+	 * ordering guarantee we need.
 	 */
 	load_cr3(next->pgd);
 
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 2b24a6974847..fe29d06e2800 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -38,6 +38,10 @@ static inline void mmgrab(struct mm_struct *mm)
 extern void __mmdrop(struct mm_struct *);
 static inline void mmdrop(struct mm_struct *mm)
 {
+	/*
+	 * The full memory barrier implied by atomic_dec_and_test() is
+	 * required by the membarrier system call.
+	 */
 	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
 		__mmdrop(mm);
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4f85494620d7..0e36d9960d91 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2649,6 +2649,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	finish_arch_post_lock_switch();
 
 	fire_sched_in_preempt_notifiers(current);
+	/*
+	 * When transitioning from a kernel thread to a userspace
+	 * thread, mmdrop()'s implicit full barrier is required by the
+	 * membarrier system call, because the current active_mm can
+	 * become the current mm without going through switch_mm().
+	 */
 	if (mm)
 		mmdrop(mm);
 	if (unlikely(prev_state == TASK_DEAD)) {
@@ -3290,6 +3296,9 @@ static void __sched notrace __schedule(bool preempt)
 	 * Make sure that signal_pending_state()->signal_pending() below
 	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
 	 * done by the caller to avoid the race with signal_wake_up().
+	 *
+	 * The membarrier system call requires a full memory barrier
+	 * after coming from user-space, before storing to rq->curr.
 	 */
 	smp_mb__before_spinlock();
 	rq_lock(rq, &rf);
-- 
2.11.0
