Date:	Mon, 1 Feb 2010 18:33:42 +1100
From:	Nick Piggin <npiggin@...e.de>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
	dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
	niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
	Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task
 switch at runqueue lock/unlock

On Sun, Jan 31, 2010 at 03:52:56PM -0500, Mathieu Desnoyers wrote:
> Depends on:
> "Create spin lock/spin unlock with distinct memory barrier"
> 
> A full memory barrier is wanted before and after runqueue data structure
> modifications so that they can be read safely by sys_membarrier() without
> holding the rq lock.
> 
> Adds no overhead on x86, because the LOCK-prefixed atomic operations used by
> spin lock/unlock already imply a full memory barrier. On other architectures,
> the spin lock acquire/release barriers are combined with the full memory
> barrier to diminish the performance impact. (Per-architecture spinlock-mb.h
> headers should be implemented gradually to replace the generic version.)
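
For concreteness, the generic fallbacks implied by that last paragraph
would be roughly the following. This is an illustrative sketch only:
the smp_mb__*() and __no_release/__no_acquire names come from the
changelog and the diff below; the definitions themselves are assumed.

/* include/linux/spinlock-mb.h -- generic version (sketch) */
#define smp_mb__before_spin_unlock()	smp_mb()
#define smp_mb__after_spin_lock()	smp_mb()

/*
 * With an explicit full barrier next to the lock operation, the
 * lock/unlock itself need not provide its own acquire/release
 * ordering, so the generic __no_acquire/__no_release variants can
 * simply map to the ordinary primitives.  An architecture whose
 * spinlock already implies a full barrier (the changelog's claim for
 * x86's LOCK-prefixed operations, disputed below) would instead make
 * the smp_mb__*() hooks no-ops and keep its normal lock/unlock.
 */
#define raw_spin_lock_irq__no_acquire(l)	raw_spin_lock_irq(l)
#define raw_spin_unlock__no_release(l)		raw_spin_unlock(l)
#define raw_spin_unlock_irq__no_release(l)	raw_spin_unlock_irq(l)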

It does add overhead on x86, as well as on most other architectures.

This really seems like the wrong optimisation to make, especially
given that there's not likely to be much code using librcu yet, right?

I'd go with the simpler and safer version of sys_membarrier (sketched
below) that does not do tricky synchronisation or add overhead to the
context-switch fastpath. Then, if you see some actual improvement in a
real program using librcu one day, we can discuss making it faster.

As it is right now, the change will definitely slow down everybody
not using librcu (i.e. nearly everything).
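
Concretely, that simple variant is just an IPI broadcast to the CPUs
that may be running threads of the caller, with no scheduler changes
at all.  A minimal sketch, written as if it lived in kernel/sched.c
where cpu_curr() is visible -- the membarrier_ipi() helper and the
details here are illustrative assumptions, not the submitted patch:

static void membarrier_ipi(void *unused)
{
	smp_mb();	/* execute a full barrier on the remote CPU */
}

SYSCALL_DEFINE0(membarrier)
{
	int cpu, this_cpu;

	smp_mb();	/* pairs with the barriers executed remotely */

	this_cpu = get_cpu();	/* keep us from migrating while scanning */
	for_each_online_cpu(cpu) {
		if (cpu == this_cpu)
			continue;
		/*
		 * Unlocked peek at the remote rq->curr; whether this
		 * read needs the extra ordering is exactly what this
		 * thread is debating.  A spurious IPI is harmless.
		 */
		if (cpu_curr(cpu)->mm == current->mm)
			smp_call_function_single(cpu, membarrier_ipi,
						 NULL, 1);
	}
	put_cpu();
	return 0;
}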

> 
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
> CC: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> CC: Steven Rostedt <rostedt@...dmis.org>
> CC: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> CC: Nicholas Miell <nmiell@...cast.net>
> CC: Linus Torvalds <torvalds@...ux-foundation.org>
> CC: mingo@...e.hu
> CC: laijs@...fujitsu.com
> CC: dipankar@...ibm.com
> CC: akpm@...ux-foundation.org
> CC: josh@...htriplett.org
> CC: dvhltc@...ibm.com
> CC: niv@...ibm.com
> CC: tglx@...utronix.de
> CC: peterz@...radead.org
> CC: Valdis.Kletnieks@...edu
> CC: dhowells@...hat.com
> ---
>  kernel/sched.c |   24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> Index: linux-2.6-lttng/kernel/sched.c
> ===================================================================
> --- linux-2.6-lttng.orig/kernel/sched.c	2010-01-31 14:59:42.000000000 -0500
> +++ linux-2.6-lttng/kernel/sched.c	2010-01-31 15:09:51.000000000 -0500
> @@ -893,7 +893,12 @@ static inline void finish_lock_switch(st
>  	 */
>  	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
>  
> -	raw_spin_unlock_irq(&rq->lock);
> +	/*
> +	 * Order mm_cpumask and rq->curr updates before following memory
> +	 * accesses. Required by sys_membarrier().
> +	 */
> +	smp_mb__before_spin_unlock();
> +	raw_spin_unlock_irq__no_release(&rq->lock);
>  }
>  
>  #else /* __ARCH_WANT_UNLOCKED_CTXSW */
> @@ -916,10 +921,15 @@ static inline void prepare_lock_switch(s
>  	 */
>  	next->oncpu = 1;
>  #endif
> +	/*
> +	 * Order mm_cpumask and rq->curr updates before following memory
> +	 * accesses. Required by sys_membarrier().
> +	 */
> +	smp_mb__before_spin_unlock();
>  #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
> -	raw_spin_unlock_irq(&rq->lock);
> +	raw_spin_unlock_irq__no_release(&rq->lock);
>  #else
> -	raw_spin_unlock(&rq->lock);
> +	raw_spin_unlock__no_release(&rq->lock);
>  #endif
>  }
>  
> @@ -5490,7 +5500,13 @@ need_resched_nonpreemptible:
>  	if (sched_feat(HRTICK))
>  		hrtick_clear(rq);
>  
> -	raw_spin_lock_irq(&rq->lock);
> +	raw_spin_lock_irq__no_acquire(&rq->lock);
> +	/*
> +	 * Order memory accesses before mm_cpumask and rq->curr updates.
> +	 * Required by sys_membarrier() when prev != next. We only learn about
> +	 * next later, so we issue this mb() unconditionally.
> +	 */
> +	smp_mb__after_spin_lock();
>  	update_rq_clock(rq);
>  	clear_tsk_need_resched(prev);
>  
> 
> -- 
> Mathieu Desnoyers
> OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
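
For reference, the pairing the patch is after looks roughly like this,
with the sys_membarrier() side sketched from the changelog (not part
of the patch itself):

/*
 * scheduler (this patch)                 sys_membarrier() (sketch)
 * ----------------------                 -------------------------
 * raw_spin_lock_irq__no_acquire();
 * smp_mb__after_spin_lock();             smp_mb();
 * rq->curr = next; update mm_cpumask;    read cpu_curr(cpu)->mm;
 * smp_mb__before_spin_unlock();          smp_mb();
 * raw_spin_unlock_irq__no_release();     IPI the CPU if it matched
 */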