Date:   Sat, 19 Aug 2017 22:05:46 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrew Hunter <ahh@...gle.com>,
        Maged Michael <maged.michael@...il.com>, gromer@...gle.com,
        Avi Kivity <avi@...lladb.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Dave Watson <davejwatson@...com>
Subject: Re: [PATCH v2] membarrier: Document scheduler barrier requirements

On Fri, Aug 18, 2017 at 09:39:16PM -0700, Mathieu Desnoyers wrote:
> Document the membarrier requirement for a full memory barrier in
> __schedule(), after coming from user-space and before storing to
> rq->curr. This barrier is provided by smp_mb__before_spinlock() in
> __schedule().
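
For illustration of the pairing: an expedited membarrier command
conceptually scans each CPU's rq->curr and IPIs the CPUs found running
the target mm. The sketch below is simplified and conceptual only;
ipi_mb() and the loop shape are illustrative assumptions, not the
actual implementation:

	static void membarrier_expedited_sketch(struct mm_struct *mm)
	{
		int cpu;

		/* A: order the caller's prior accesses before the rq->curr loads. */
		smp_mb();
		for_each_online_cpu(cpu) {
			struct task_struct *p = READ_ONCE(cpu_rq(cpu)->curr);

			if (p && p->mm == mm)
				smp_call_function_single(cpu, ipi_mb, NULL, 1);
		}
		/* B: order the loads and IPIs before the caller's later accesses. */
		smp_mb();
	}

Without the barrier documented above, the scan could observe the newly
stored rq->curr, skip the IPI on that basis, and still race with the
outgoing task's user-space accesses.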
> 
> Document that membarrier requires a full barrier on the transition from
> a kernel thread to a userspace thread, a path which skips the call to
> switch_mm(). We currently have an implicit barrier from
> atomic_dec_and_test() in mmdrop() that ensures this.
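
As background on why atomic_dec_and_test() suffices: per
Documentation/memory-barriers.txt, atomic RMW operations that return a
value are fully ordered, so for ordering purposes the decrement in
mmdrop() behaves as if bracketed by explicit barriers:

	smp_mb();				/* implied before the RMW */
	atomic_dec_and_test(&mm->mm_count);	/* the decrement itself */
	smp_mb();				/* implied after the RMW */

This is the full barrier membarrier relies on when switch_mm() is
skipped on the kernel thread -> userspace thread path.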
> 
> The x86 switch_mm_irqs_off() full barrier is currently provided by many
> cpumask update operations as well as by load_cr3(). Document that
> load_cr3() provides this barrier.
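
For reference, load_cr3() at this point is essentially the following
(simplified from arch/x86/include/asm/processor.h):

	static inline void load_cr3(pgd_t *pgdir)
	{
		/*
		 * A mov to %cr3 is an architecturally serializing
		 * instruction: it cannot be reordered against earlier
		 * or later memory accesses, so it is at least as strong
		 * as a full memory barrier.
		 */
		write_cr3(__pa(pgdir));
	}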
> 
> [ Rebased on top of linux-rcu for-mingo branch.
>   Applies on top of "membarrier: Provide expedited private command". ]

I have queued this for review and testing, thank you!

							Thanx, Paul

> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> CC: Boqun Feng <boqun.feng@...il.com>
> CC: Andrew Hunter <ahh@...gle.com>
> CC: Maged Michael <maged.michael@...il.com>
> CC: gromer@...gle.com
> CC: Avi Kivity <avi@...lladb.com>
> CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> CC: Paul Mackerras <paulus@...ba.org>
> CC: Michael Ellerman <mpe@...erman.id.au>
> CC: Dave Watson <davejwatson@...com>
> ---
>  arch/x86/mm/tlb.c        | 3 +++
>  include/linux/sched/mm.h | 4 ++++
>  kernel/sched/core.c      | 9 +++++++++
>  3 files changed, 16 insertions(+)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 014d07a..cd815b6 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -133,6 +133,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	 * and neither LOCK nor MFENCE orders them.
>  	 * Fortunately, load_cr3() is serializing and gives the
>  	 * ordering guarantee we need.
> +	 *
> +	 * This full barrier is also required by the membarrier
> +	 * system call.
>  	 */
>  	load_cr3(next->pgd);
> 
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 2b24a69..fe29d06 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -38,6 +38,10 @@ static inline void mmgrab(struct mm_struct *mm)
>  extern void __mmdrop(struct mm_struct *);
>  static inline void mmdrop(struct mm_struct *mm)
>  {
> +	/*
> +	 * The full memory barrier implied by atomic_dec_and_test() is
> +	 * required by the membarrier system call.
> +	 */
>  	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>  		__mmdrop(mm);
>  }
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3f29c6a..b0f199f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2654,6 +2654,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>  	finish_arch_post_lock_switch();
> 
>  	fire_sched_in_preempt_notifiers(current);
> +	/*
> +	 * When transitioning from a kernel thread to a userspace
> +	 * thread, mmdrop()'s implicit full barrier is required by the
> +	 * membarrier system call, because the current active_mm can
> +	 * become the current mm without going through switch_mm().
> +	 */
>  	if (mm)
>  		mmdrop(mm);
>  	if (unlikely(prev_state == TASK_DEAD)) {
> @@ -3295,6 +3301,9 @@ static void __sched notrace __schedule(bool preempt)
>  	 * Make sure that signal_pending_state()->signal_pending() below
>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>  	 * done by the caller to avoid the race with signal_wake_up().
> +	 *
> +	 * The membarrier system call requires a full memory barrier
> +	 * after coming from user-space, before storing to rq->curr.
>  	 */
>  	smp_mb__before_spinlock();
>  	rq_lock(rq, &rf);
> -- 
> 1.9.1
> 
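
For completeness, the guarantee as seen from user-space: a minimal
sketch of the slow path of an asymmetric fence built on membarrier(2).
MEMBARRIER_CMD_SHARED is shown; the expedited private command this
series builds on follows the same pattern:

	#include <linux/membarrier.h>	/* MEMBARRIER_CMD_* */
	#include <stdio.h>
	#include <sys/syscall.h>	/* SYS_membarrier */
	#include <unistd.h>		/* syscall() */

	static int membarrier(int cmd, int flags)
	{
		return syscall(SYS_membarrier, cmd, flags);
	}

	int main(void)
	{
		int cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);

		if (cmds < 0 || !(cmds & MEMBARRIER_CMD_SHARED)) {
			fprintf(stderr, "membarrier(2) unsupported\n");
			return 1;
		}

		/*
		 * Slow path of an asymmetric fence: when this returns,
		 * every running thread has executed a full memory
		 * barrier.  Threads concurrently being scheduled in or
		 * out are covered by the scheduler-side barriers this
		 * patch documents.
		 */
		membarrier(MEMBARRIER_CMD_SHARED, 0);
		return 0;
	}

Fast-path readers then need only a compiler barrier where they would
otherwise need an smp_mb()-equivalent fence.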
