Date: Mon, 18 Sep 2017 11:31:09 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
	Boqun Feng <boqun.feng@...il.com>, Andrew Hunter <ahh@...gle.com>,
	Maged Michael <maged.michael@...il.com>, gromer@...gle.com,
	Avi Kivity <avi@...lladb.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Paul Mackerras <paulus@...ba.org>,
	Michael Ellerman <mpe@...erman.id.au>,
	Dave Watson <davejwatson@...com>
Subject: Re: [PATCH] membarrier: Document scheduler barrier requirements

On Mon, Sep 18, 2017 at 02:01:22PM -0400, Mathieu Desnoyers wrote:
> Document the membarrier requirement on having a full memory barrier in
> __schedule() after coming from user-space, before storing to rq->curr.
> It is provided by smp_mb__after_spinlock() in __schedule().
>
> Document that membarrier requires a full barrier on the transition from
> a kernel thread to a userspace thread. We currently have an implicit
> barrier from atomic_dec_and_test() in mmdrop() that ensures this.
>
> The x86 switch_mm_irqs_off() full barrier is currently provided by many
> cpumask update operations as well as by write_cr3(). Document that
> write_cr3() provides this barrier.
>
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>

Queued for review, thank you Mathieu!

							Thanx, Paul

> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> CC: Boqun Feng <boqun.feng@...il.com>
> CC: Andrew Hunter <ahh@...gle.com>
> CC: Maged Michael <maged.michael@...il.com>
> CC: gromer@...gle.com
> CC: Avi Kivity <avi@...lladb.com>
> CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> CC: Paul Mackerras <paulus@...ba.org>
> CC: Michael Ellerman <mpe@...erman.id.au>
> CC: Dave Watson <davejwatson@...com>
> ---
>  arch/x86/mm/tlb.c        | 5 +++++
>  include/linux/sched/mm.h | 4 ++++
>  kernel/sched/core.c      | 9 +++++++++
>  3 files changed, 18 insertions(+)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 1ab3821f9e26..fa3bbe048af0 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -144,6 +144,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	}
>  #endif
>
> +	/*
> +	 * The membarrier system call requires a full memory barrier
> +	 * after coming from user-space, before storing to rq->curr.
> +	 * Writing to CR3 provides that full memory barrier.
> +	 */
>  	if (real_prev == next) {
>  		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>  			  next->context.ctx_id);
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index df4005e2c4cf..f3bc261fe7c7 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -38,6 +38,10 @@ static inline void mmgrab(struct mm_struct *mm)
>  extern void __mmdrop(struct mm_struct *);
>  static inline void mmdrop(struct mm_struct *mm)
>  {
> +	/*
> +	 * The full memory barrier implied by atomic_dec_and_test() is
> +	 * required by the membarrier system call.
> +	 */
>  	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>  		__mmdrop(mm);
>  }
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c5c1b2c51807..48d524b18868 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2648,6 +2648,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>  		finish_arch_post_lock_switch();
>
>  	fire_sched_in_preempt_notifiers(current);
> +	/*
> +	 * When transitioning from a kernel thread to a userspace
> +	 * thread, mmdrop()'s implicit full barrier is required by the
> +	 * membarrier system call, because the current active_mm can
> +	 * become the current mm without going through switch_mm().
> +	 */
>  	if (mm)
>  		mmdrop(mm);
>  	if (unlikely(prev_state == TASK_DEAD)) {
> @@ -3289,6 +3295,9 @@ static void __sched notrace __schedule(bool preempt)
>  	 * Make sure that signal_pending_state()->signal_pending() below
>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>  	 * done by the caller to avoid the race with signal_wake_up().
> +	 *
> +	 * The membarrier system call requires a full memory barrier
> +	 * after coming from user-space, before storing to rq->curr.
>  	 */
>  	rq_lock(rq, &rf);
>  	smp_mb__after_spinlock();
> --
> 2.11.0
>
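For context on why these comments matter: the ordering documented above is
the kernel half of a user-space contract. membarrier(2) lets one slow-path
thread impose the equivalent of a full memory barrier on every CPU running
user-space, so fast-path threads can drop their fence instructions down to
compiler barriers. Below is a minimal C sketch of that pattern, modeled on
the Dekker-style example in the membarrier(2) man page rather than on
anything in this patch; fast_path(), slow_path(), and the variables a and b
are illustrative names, and it assumes a kernel built with
CONFIG_MEMBARRIER=y (Linux >= 4.3). Real code should probe availability
once with MEMBARRIER_CMD_QUERY and fall back to full fences on both sides.

    #include <linux/membarrier.h>   /* MEMBARRIER_CMD_SHARED */
    #include <sys/syscall.h>        /* __NR_membarrier (no glibc wrapper) */
    #include <unistd.h>             /* syscall() */

    static volatile int a, b;

    /*
     * Hot path, run concurrently by many threads: only a compiler
     * barrier, no fence instruction.
     */
    static void fast_path(int *read_b)
    {
            a = 1;
            __asm__ __volatile__("" ::: "memory");
            *read_b = b;
    }

    /*
     * Cold path: the system call does not return until every thread
     * running user-space has passed through a full memory barrier,
     * so fast_path() and slow_path() cannot both read 0, exactly as
     * if both sides had used heavyweight fences.
     */
    static void slow_path(int *read_a)
    {
            b = 1;
            syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0);
            *read_a = a;
    }

The scheduler barriers documented by the patch are what make this sound:
membarrier's expedited variant decides which CPUs to interrupt by reading
each runqueue's rq->curr, so when it observes a freshly switched-in task
and skips that CPU, the full barrier that __schedule() issues after the
previous task's user-space accesses and before the store to rq->curr
guarantees those accesses are already globally ordered.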