Date: Mon, 18 Sep 2017 10:07:52 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: mathieu.desnoyers@...icios.com
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
	boqun.feng@...il.com, ahh@...gle.com, maged.michael@...il.com,
	gromer@...gle.com, avi@...lladb.com, benh@...nel.crashing.org,
	paulus@...ba.org, mpe@...erman.id.au, davejwatson@...com
Subject: sys_membarrier() scheduler barrier requirements: dropped

Hello!

Commit 3ed668659e95 ("membarrier: Document scheduler barrier
requirements") did not make it into the v4.14 merge window, and rebasing
to v4.14-rc1 results in conflicts.  I have therefore dropped it.

If someone would be willing to forward-port it, I would be quite happy
to pull it back in for the v4.15 merge window.  Or potentially as part
of the fix to sys_membarrier() for 4.14, for that matter.

							Thanx, Paul

------------------------------------------------------------------------

commit 3ed668659e95ecfb6f6be0a3e7ff0fa6d27b2f5c
Author: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Date:   Fri Aug 18 21:39:16 2017 -0700

    membarrier: Document scheduler barrier requirements

    Document the membarrier requirement on having a full memory barrier in
    __schedule() after coming from user-space, before storing to rq->curr.
    It is provided by smp_mb__before_spinlock() in __schedule().

    Document that membarrier requires a full barrier on transition from
    kernel thread to userspace thread, which skips the call to switch_mm().
    We currently have an implicit barrier from atomic_dec_and_test() in
    mmdrop() that ensures this.

    The x86 switch_mm_irqs_off() full barrier is currently provided by many
    cpumask update operations as well as load_cr3().  Document that
    load_cr3() is providing this barrier.

    [ Rebased on top of linux-rcu for-mingo branch.
      Applies on top of "membarrier: Provide expedited private command". ]

    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
    CC: Peter Zijlstra <peterz@...radead.org>
    CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@...il.com>
    CC: Andrew Hunter <ahh@...gle.com>
    CC: Maged Michael <maged.michael@...il.com>
    CC: gromer@...gle.com
    CC: Avi Kivity <avi@...lladb.com>
    CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
    CC: Paul Mackerras <paulus@...ba.org>
    CC: Michael Ellerman <mpe@...erman.id.au>
    CC: Dave Watson <davejwatson@...com>
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 014d07a80053..cd815b63420a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -133,6 +133,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 * and neither LOCK nor MFENCE orders them.
 		 * Fortunately, load_cr3() is serializing and gives the
 		 * ordering guarantee we need.
+		 *
+		 * This full barrier is also required by the membarrier
+		 * system call.
 		 */
 		load_cr3(next->pgd);

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 2b24a6974847..fe29d06e2800 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -38,6 +38,10 @@ static inline void mmgrab(struct mm_struct *mm)
 extern void __mmdrop(struct mm_struct *);
 static inline void mmdrop(struct mm_struct *mm)
 {
+	/*
+	 * The implicit full barrier implied by atomic_dec_and_test is
+	 * required by the membarrier system call.
+	 */
 	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
 		__mmdrop(mm);
 }

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3f29c6a89d80..b0f199f9ec62 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2654,6 +2654,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	finish_arch_post_lock_switch();

 	fire_sched_in_preempt_notifiers(current);
+	/*
+	 * When transitioning from a kernel thread to a userspace
+	 * thread, mmdrop()'s implicit full barrier is required by the
+	 * membarrier system call, because the current active_mm can
+	 * become the current mm without going through switch_mm().
+	 */
 	if (mm)
 		mmdrop(mm);
 	if (unlikely(prev_state == TASK_DEAD)) {
@@ -3295,6 +3301,9 @@ static void __sched notrace __schedule(bool preempt)
 	 * Make sure that signal_pending_state()->signal_pending() below
 	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
 	 * done by the caller to avoid the race with signal_wake_up().
+	 *
+	 * The membarrier system call requires a full memory barrier
+	 * after coming from user-space, before storing to rq->curr.
 	 */
 	smp_mb__before_spinlock();
 	rq_lock(rq, &rf);
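------------------------------------------------------------------------

For context, the user-space side of the ordering contract documented
above can be sketched as follows.  This is an illustrative example, not
part of the patch: it assumes a Linux 4.3+ kernel with
CONFIG_MEMBARRIER=y and uses the long-standing MEMBARRIER_CMD_SHARED
command rather than the expedited private command the series above
introduces.  The call promises a full memory barrier on every thread of
the process the kernel observes as running, which is only sound if the
scheduler itself issues the barriers being documented, ordering a
task's user-space accesses against the store to rq->curr that makes the
task visible to membarrier's scan of the CPUs.

/*
 * Illustrative sketch, not part of the patch.
 * Build: gcc -o mb-demo mb-demo.c  (Linux 4.3+, CONFIG_MEMBARRIER=y)
 */
#include <linux/membarrier.h>	/* MEMBARRIER_CMD_* */
#include <sys/syscall.h>	/* __NR_membarrier */
#include <unistd.h>		/* syscall() */
#include <stdio.h>

/* glibc provides no wrapper for membarrier(2), so invoke it directly. */
static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* MEMBARRIER_CMD_QUERY returns a bitmask of supported commands. */
	int cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);

	if (cmds < 0 || !(cmds & MEMBARRIER_CMD_SHARED)) {
		fprintf(stderr, "membarrier(2) not supported here\n");
		return 1;
	}

	/*
	 * Slow path: after this returns, every thread that was running
	 * has executed the equivalent of a full memory barrier.  Threads
	 * scheduled in or out while the command runs are covered by the
	 * scheduler barriers documented in the patch above.
	 */
	if (membarrier(MEMBARRIER_CMD_SHARED, 0))
		return 1;
	return 0;
}

The appeal is the asymmetry: the fast paths that this slow path pairs
with need only compiler barriers, rather than a full smp_mb() on both
sides.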