Date: Mon, 4 Apr 2016 15:12:05 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: linux-kernel@...r.kernel.org, will.deacon@....com
Cc: waiman.long@....com, mingo@...hat.com, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, torvalds@...ux-foundation.org,
dave@...olabs.net
Subject: Re: [RFC][PATCH 3/3] locking,arm64: Introduce cmpwait()
On Mon, Apr 04, 2016 at 02:22:53PM +0200, Peter Zijlstra wrote:
> +#define __CMPWAIT_GEN(w, sz, name) \
> +static inline							\
> +void __cmpwait_case_##name(volatile void *ptr, unsigned long val) \
> +{ \
> + unsigned long tmp; \
> + \
> + asm volatile( \
> + " ldxr" #sz "\t%" #w "[tmp], %[v]\n" \
> + " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \
> + " cbnz %" #w "[tmp], 1f\n" \
> + " wfe\n" \
> + "1:" \
> + : [tmp] "=&r" (tmp), [val] "=&r" (val), \
> + [v] "+Q" (*(unsigned long *)ptr)); \
And this probably wants a "memory" clobber too, to force the compiler
to reload any cached memory values after this returns.
> +}