Date:   Fri, 22 Sep 2017 15:26:38 +0000 (UTC)
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Boqun Feng <boqun.feng@...il.com>,
        Andrew Hunter <ahh@...gle.com>,
        maged michael <maged.michael@...il.com>,
        gromer <gromer@...gle.com>, Avi Kivity <avi@...lladb.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Dave Watson <davejwatson@...com>
Subject: Re: [PATCH v3] membarrier: Document scheduler barrier requirements

----- On Sep 21, 2017, at 8:25 AM, Peter Zijlstra peterz@...radead.org wrote:

> On Tue, Sep 19, 2017 at 06:02:05PM -0400, Mathieu Desnoyers wrote:
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index 1ab3821f9e26..74f94fe4aded 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -144,6 +144,11 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>>  	}
>>  #endif
>>  
>> +	/*
>> +	 * The membarrier system call requires a full memory barrier
>> +	 * before returning to user-space, after storing to rq->curr.
>> +	 * Writing to CR3 provides that full memory barrier.
>> +	 */
>>  	if (real_prev == next) {
>>  		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>>  			  next->context.ctx_id);
>> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
>> index 3a19c253bdb1..766cc47c4d7c 100644
>> --- a/include/linux/sched/mm.h
>> +++ b/include/linux/sched/mm.h
>> @@ -38,6 +38,11 @@ static inline void mmgrab(struct mm_struct *mm)
>>  extern void __mmdrop(struct mm_struct *);
>>  static inline void mmdrop(struct mm_struct *mm)
>>  {
>> +	/*
>> +	 * The full memory barrier implied by atomic_dec_and_test() is
>> +	 * required by the membarrier system call before returning to
>> +	 * user-space, after storing to rq->curr.
>> +	 */
>>  	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
>>  		__mmdrop(mm);
>>  }
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 18a6966567da..7977b25acf54 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2658,6 +2658,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
>>  	finish_arch_post_lock_switch();
>>  
>>  	fire_sched_in_preempt_notifiers(current);
>> +	/*
>> +	 * When transitioning from a kernel thread to a userspace
>> +	 * thread, mmdrop()'s implicit full barrier is required by the
>> +	 * membarrier system call, because the current active_mm can
>> +	 * become the current mm without going through switch_mm().
>> +	 */
>>  	if (mm)
>>  		mmdrop(mm);
>>  	if (unlikely(prev_state == TASK_DEAD)) {
> 
> 
> I would also put a comment in context_switch() that explains we either
> pass through switch_mm() or do mmdrop().
> 
> And I think that for the weak archs that don't have native RELEASE we
> actually rely on rq_unlock() for the smp_mb().
> 
> So there's 4 schemes:
> 
> - switch_mm()/mmdrop() (x86, s390, sparc?)
> - finish_lock_switch() (weak, !release)
> - switch_to (arm64)
> - membarrier arch hook (ppc)
> 
> And I don't think that's spelled out clearly enough.
> 
>> @@ -3299,6 +3305,9 @@ static void __sched notrace __schedule(bool preempt)
>>  	 * Make sure that signal_pending_state()->signal_pending() below
>>  	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
>>  	 * done by the caller to avoid the race with signal_wake_up().
>> +	 *
>> +	 * The membarrier system call requires a full memory barrier
>> +	 * after coming from user-space, before storing to rq->curr.
>>  	 */
>>  	rq_lock(rq, &rf);
>>  	smp_mb__after_spinlock();
> 
> Right, this is the only part that's actually trivial :-)
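
To spell out the pairing those comments describe (a sketch only, not
actual kernel code; it assumes the expedited membarrier implementation
reads rq->curr to decide whether a CPU needs an IPI):

/*
 * CPU0: __schedule()                    CPU1: sys_membarrier()
 *
 *   <user-space accesses of prev>         smp_mb();
 *   rq_lock(rq, &rf);                     curr = READ_ONCE(rq->curr);
 *   smp_mb__after_spinlock();             if (curr->mm != current->mm)
 *   rq->curr = next;                              continue; /* no IPI */
 *   <full barrier: one of the             smp_mb();
 *    per-arch schemes>
 *   <user-space accesses of next>
 */

Without the full barriers around the store to rq->curr, CPU1 could
decide to skip the IPI based on a stale rq->curr value while CPU0's
user-space accesses get reordered across the context switch.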

Does something like this work? (except for tabs vs spaces)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 08095bb1cfe6..6254f87645de 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2760,6 +2760,13 @@ context_switch(struct rq *rq, struct task_struct *prev,
         */
        arch_start_context_switch(prev);
 
+       /*
+        * If mm is non-NULL, we pass through switch_mm(). If mm is
+        * NULL, we will pass through mmdrop() in finish_task_switch().
+        * Both of these contain the full memory barrier required by
+        * membarrier after storing to rq->curr, before returning to
+        * user-space.
+        */
        if (!mm) {
                next->active_mm = oldmm;
                mmgrab(oldmm);
@@ -3346,16 +3353,17 @@ static void __sched notrace __schedule(bool preempt)
                /*
                 * The membarrier system call requires each architecture
                 * to have a full memory barrier after updating
-                * rq->curr, before returning to user-space. For TSO
-                * (e.g. x86), the architecture must provide its own
-                * barrier in switch_mm(). For weakly ordered machines
-                * for which spin_unlock() acts as a full memory
-                * barrier, finish_lock_switch() in common code takes
-                * care of this barrier. For weakly ordered machines for
-                * which spin_unlock() acts as a RELEASE barrier (only
-                * arm64 and PowerPC), arm64 has a full barrier in
-                * switch_to(), and PowerPC has a full barrier in
-                * membarrier_arch_sched_in().
+                * rq->curr, before returning to user-space.
+                *
+                * Here are the schemes providing that barrier on the
+                * various architectures:
+                * - mm ? switch_mm() : mmdrop() for x86, s390, sparc,
+                * - finish_lock_switch() for weakly-ordered
+                *   architectures where spin_unlock is a full barrier,
+                * - switch_to() for arm64 (weakly-ordered, spin_unlock
+                *   is a RELEASE barrier),
+                * - membarrier_arch_sched_in() for PowerPC
+                *   (weakly-ordered, spin_unlock is a RELEASE barrier).
                 */
                ++*switch_count;
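
For reference, here is the kind of user-space code that relies on
those scheduler-side barriers (an illustration only, not part of the
patch; it uses the classic asymmetric-fence pattern with
MEMBARRIER_CMD_SHARED, and a raw syscall since glibc provides no
wrapper):

#include <linux/membarrier.h>	/* MEMBARRIER_CMD_QUERY, MEMBARRIER_CMD_SHARED */
#include <stdatomic.h>
#include <stdio.h>
#include <sys/syscall.h>	/* SYS_membarrier */
#include <unistd.h>		/* syscall() */

static _Atomic int x, y;

static int membarrier(int cmd, int flags)
{
	return syscall(SYS_membarrier, cmd, flags);
}

/* Fast path (e.g. many readers): a compiler barrier only. */
static int fast_side(void)
{
	atomic_store_explicit(&x, 1, memory_order_relaxed);
	__asm__ __volatile__("" : : : "memory");	/* like barrier() */
	return atomic_load_explicit(&y, memory_order_relaxed);
}

/*
 * Slow path (e.g. one writer): membarrier() promotes the fast side's
 * compiler barriers to full memory barriers. That promotion is sound
 * only because the scheduler guarantees a full barrier between a
 * thread's user-space accesses and the rq->curr update that
 * membarrier() bases its IPI/skip decision on.
 */
static int slow_side(void)
{
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	membarrier(MEMBARRIER_CMD_SHARED, 0);
	return atomic_load_explicit(&x, memory_order_relaxed);
}

int main(void)
{
	if (membarrier(MEMBARRIER_CMD_QUERY, 0) < 0) {
		perror("membarrier");
		return 1;
	}
	/*
	 * Run fast_side() and slow_side() from two threads: the
	 * outcome fast_side() == 0 && slow_side() == 0 is forbidden,
	 * exactly as if both sides had used smp_mb().
	 */
	printf("fast: %d slow: %d\n", fast_side(), slow_side());
	return 0;
}

MEMBARRIER_CMD_SHARED orders against every running thread on the
system; the private expedited variant under discussion narrows that
to threads of the calling process.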

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
