Message-ID: <20160413125243.GA6810@worktop.ger.corp.intel.com>
Date: Wed, 13 Apr 2016 14:52:43 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org, waiman.long@....com,
mingo@...hat.com, paulmck@...ux.vnet.ibm.com, boqun.feng@...il.com,
torvalds@...ux-foundation.org, dave@...olabs.net
Subject: Re: [RFC][PATCH 3/3] locking,arm64: Introduce cmpwait()
On Tue, Apr 12, 2016 at 05:59:41PM +0100, Will Deacon wrote:
> Thanks for looking at this!
n/p, had to see what it would look like etc. :-)
> > I've misplaced my arm64 compiler, so this is not even compile tested.
>
> Guess what? ;)
>
> make: *** [init] Error 2
> make: *** Waiting for unfinished jobs....
>
> (and lot of similar errors).
>
> Looks like you're just missing an #undef in cmpxchg.h.
>
> FWIW, you can pick up arm64 toolchain binaries from:
>
> https://releases.linaro.org/components/toolchain/binaries/latest-5/
Ah, I usually build a whole set from sources; for some reason arm64
didn't build in the latest run. I'll have to kick it.
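And yes, the missing line is presumably just the matching #undef after
the instantiations in cmpxchg.h, something like:

	#undef __CMPWAIT_GEN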
> > +#define __CMPWAIT_GEN(w, sz, name) \
> > +void __cmpwait_case_##name(volatile void *ptr, unsigned long val) \
> > +{ \
> > + unsigned long tmp; \
> > + \
> > + asm volatile( \
> > + " ldxr" #sz "\t%" #w "[tmp], %[v]\n" \
> > + " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \
> > + " cbnz %" #w "[tmp], 1f\n" \
>
> Shouldn't this be cbz? (i.e. branch over the wfe if the value is equal
> to what we wanted?).
Indeed so.
> > + " wfe\n" \
> > + "1:" \
> > + : [tmp] "=&r" (tmp), [val] "=&r" (val), \
>
> We only read val, so it can be an input operand, no?
True.. :-)
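FWIW, with both fixes folded in (cbz instead of cbnz, and val moved to
the input operand list) the generator would look something like this --
still untested, and guessing at the memory operand that the quote above
cuts off:

#define __CMPWAIT_GEN(w, sz, name)					\
void __cmpwait_case_##name(volatile void *ptr, unsigned long val)	\
{									\
	unsigned long tmp;						\
									\
	asm volatile(							\
	"	ldxr" #sz "\t%" #w "[tmp], %[v]\n"			\
	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"	\
	"	cbz	%" #w "[tmp], 1f\n"				\
	"	wfe\n"							\
	"1:"								\
	: [tmp] "=&r" (tmp), [v] "+Q" (*(unsigned long *)ptr)		\
	: [val] "r" (val));						\
}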
> > +#define cmpwait(ptr, val) \
> > + __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
>
> We might want to call this cmpwait_relaxed, in case we decide to add
> fenced versions in the future. Or just make it cmpwait_acquire and
> remove the smp_rmb() from smp_cond_load_acquire(). Dunno.
This is something I'll very much leave up to you; I have no idea about
the tradeoffs involved here.
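The two shapes would look something like this (a rough sketch, untested,
assuming the __cmpwait() helper from the patch as the backend; the
barrier placement just restates your smp_rmb() point above):

/* relaxed: no ordering, smp_cond_load_acquire() keeps its smp_rmb() */
#define cmpwait_relaxed(ptr, val)					\
	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))

/* acquire: fold the barrier in, drop the smp_rmb() from
   smp_cond_load_acquire() */
#define cmpwait_acquire(ptr, val) do {					\
	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)));	\
	smp_rmb();							\
} while (0)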