Date:	Fri, 1 Apr 2016 17:47:43 +0100
From:	Will Deacon <will.deacon@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Waiman Long <waiman.long@....com>, Ingo Molnar <mingo@...hat.com>,
	linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>
Subject: Re: [PATCH] locking/qrwlock: Allow multiple spinning readers

On Fri, Apr 01, 2016 at 01:43:03PM +0200, Peter Zijlstra wrote:
> > Ah, yes, I forgot about that. Lemme go find that discussion and see
> > what I can do there.
> 
> Completely untested..
> 
> ---
> include/linux/compiler.h   | 20 ++++++++++++++------
> kernel/locking/qspinlock.c | 12 ++++++------
> kernel/sched/core.c        |  9 +++++----
> kernel/sched/sched.h       |  2 +-
> kernel/smp.c               |  2 +-
> 5 files changed, 27 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index b5ff9881bef8..c64f5897664f 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -305,7 +305,8 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
> })
> 
> /**
> - * smp_cond_acquire() - Spin wait for cond with ACQUIRE ordering
> + * smp_cond_load_acquire() - Spin wait for cond with ACQUIRE ordering
> > + * @ptr:  pointer to the variable to wait on
>  * @cond: boolean expression to wait for
>  *
>  * Equivalent to using smp_load_acquire() on the condition variable but employs
> @@ -315,11 +316,18 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
>  * provides LOAD->LOAD order, together they provide LOAD->{LOAD,STORE} order,
>  * aka. ACQUIRE.
>  */
> -#define smp_cond_acquire(cond)	do {		\
> -	while (!(cond))				\
> -		cpu_relax();			\
> -	smp_rmb(); /* ctrl + rmb := acquire */	\
> -} while (0)
> +#define smp_cond_load_acquire(ptr, cond_expr)	({		\
> +	typeof(ptr) __PTR = (ptr);				\
> +	typeof(*ptr) VAL;					\

It's a bit grim having a magic variable name, but I have no better
suggestion.

> +	for (;;) {						\
> +		VAL = READ_ONCE(*__PTR);			\
> +		if (cond_expr)					\
> +			break;					\
> +		cpu_relax();					\
> +	}							\
> +	smp_rmb(); /* ctrl + rmb := acquire */			\
> +	VAL;							\
> +})

Can you stick some #ifndef guards around this, please? That way I can do
my ldxr/wfe-based version for ARM that makes the spinning tolerable. Also,
wouldn't this be better suited to barrier.h?

Otherwise, I really like this idea. Thanks!

Will
