Message-ID: <20160420103059.GX3408@twins.programming.kicks-ass.net>
Date: Wed, 20 Apr 2016 12:30:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jason Low <jason.low2@...com>
Cc: Will Deacon <will.deacon@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
terry.rudd@....com, "Long, Wai Man" <waiman.long@....com>,
"boqun.feng@...il.com" <boqun.feng@...il.com>,
"dave@...olabs.net" <dave@...olabs.net>
Subject: Re: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks

On Thu, Apr 14, 2016 at 12:13:38AM -0700, Jason Low wrote:
> Use WFE to avoid most spinning with MCS spinlocks. This is implemented
> with the new cmpwait() mechanism for comparing and waiting for the MCS
> locked value to change using LDXR + WFE.
>
> Signed-off-by: Jason Low <jason.low2@...com>
> ---
> arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
> create mode 100644 arch/arm64/include/asm/mcs_spinlock.h
>
> diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
> new file mode 100644
> index 0000000..d295d9d
> --- /dev/null
> +++ b/arch/arm64/include/asm/mcs_spinlock.h
> @@ -0,0 +1,21 @@
> +#ifndef __ASM_MCS_SPINLOCK_H
> +#define __ASM_MCS_SPINLOCK_H
> +
> +#define arch_mcs_spin_lock_contended(l) \
> +do { \
> + int locked_val; \
> + for (;;) { \
> + locked_val = READ_ONCE(*l); \
> + if (locked_val) \
> + break; \
> + cmpwait(l, locked_val); \
> + } \
> + smp_rmb(); \
> +} while (0)
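
For reference, cmpwait(ptr, val) is the proposed primitive that compares
*ptr against val and, while the two still match, waits for the location
to change; per the changelog above it maps onto LDXR + WFE on arm64. A
rough illustrative sketch follows (the signature and asm here are
assumed for illustration, not copied from the actual series):

	/*
	 * Sketch only: LDXR arms the exclusive monitor for *ptr; if the
	 * value still matches, WFE puts the CPU into a low-power wait
	 * until a store to *ptr (or another event) clears the monitor.
	 * The caller then re-reads the value and decides what to do.
	 */
	static inline void cmpwait(volatile int *ptr, int val)
	{
		int tmp;

		asm volatile(
		"	ldxr	%w[tmp], %[v]\n"	/* arm exclusive monitor */
		"	cmp	%w[tmp], %w[val]\n"
		"	b.ne	1f\n"			/* already changed: no wait */
		"	wfe\n"				/* wait for event */
		"1:"
		: [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
		: [val] "r" (val)
		: "cc", "memory");
	}
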
If you make the generic version use smp_cond_load_acquire(), this
arch-specific implementation (including the trailing smp_rmb()) isn't
needed.
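
Concretely, the generic arch_mcs_spin_lock_contended() in
kernel/locking/mcs_spinlock.h could become something like the sketch
below (the idea rather than necessarily the final form):
smp_cond_load_acquire() re-reads *l until the condition holds, waiting
cmpwait()-style where the architecture provides it, and its final load
carries ACQUIRE semantics, so both the arm64 override and the explicit
smp_rmb() go away:

	/*
	 * Generic contended-lock slowpath, sketched: VAL names the value
	 * loaded from *l on each iteration; spin/wait until it becomes
	 * non-zero (the previous owner hands the lock off by setting it).
	 * The ACQUIRE ordering of smp_cond_load_acquire() replaces the
	 * explicit smp_rmb().
	 */
	#define arch_mcs_spin_lock_contended(l)				\
	do {								\
		smp_cond_load_acquire(l, VAL);				\
	} while (0)
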