Message-ID: <20200702094833.GA16248@willie-the-truck>
Date: Thu, 2 Jul 2020 10:48:33 +0100
From: Will Deacon <will@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org,
Sami Tolvanen <samitolvanen@...gle.com>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Kees Cook <keescook@...omium.org>,
Marco Elver <elver@...gle.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Matt Turner <mattst88@...il.com>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>,
Richard Henderson <rth@...ddle.net>,
Peter Zijlstra <peterz@...radead.org>,
Alan Stern <stern@...land.harvard.edu>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Arnd Bergmann <arnd@...db.de>,
Boqun Feng <boqun.feng@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
linux-arm-kernel@...ts.infradead.org, linux-alpha@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kernel-team@...roid.com
Subject: Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered
implementation
On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p) \
> > +({ \
> > + __unqual_scalar_typeof(*p) ___p1 = \
> > + (*(volatile typeof(___p1) *)(p)); \
> > + compiletime_assert_atomic_type(*p); \
> > + ___p1; \
> > +})
>
> Sorry if I'm being thick, but doesn't this need a barrier after the
> volatile access to provide the acquire semantic?
>
> IIUC prior to this commit alpha would have used the asm-generic
> __smp_load_acquire, i.e.
>
> | #ifndef __smp_load_acquire
> | #define __smp_load_acquire(p) \
> | ({ \
> | __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p); \
> | compiletime_assert_atomic_type(*p); \
> | __smp_mb(); \
> | (typeof(*p))___p1; \
> | })
> | #endif
>
> ... where the __smp_mb() would be alpha's mb() from earlier in the patch
> context, i.e.
>
> | #define mb() __asm__ __volatile__("mb": : :"memory")
>
> ... so don't we need similar before returning ___p1 above in
> __smp_load_acquire() (and also matching the old read_barrier_depends())?
>
> [...]
>
> > +#include <asm/barrier.h>
> > +
> > +/*
> > + * Alpha is apparently daft enough to reorder address-dependent loads
> > + * on some CPU implementations. Knock some common sense into it with
> > + * a memory barrier in READ_ONCE().
> > + */
> > +#define __READ_ONCE(x) __smp_load_acquire(&(x))
>
> As above, I don't see a memory barrier implied here, so this doesn't
> look quite right.
You're right, and Peter spotted the same thing off-list. I've reworked
locally so that the mb() ends up in __READ_ONCE() and __smp_load_acquire()
calls __READ_ONCE() instead of the other way round (which made more
sense before the rework in the merge window).
Will
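
For reference, a rough sketch of the shape described above -- the mb()
folded into __READ_ONCE() and __smp_load_acquire() layered on top of
it -- reusing the __unqual_scalar_typeof() machinery from the quoted
patch. This is only an illustration of the intended structure, not the
actual reworked series:

  #define __READ_ONCE(x)						\
  ({									\
  	__unqual_scalar_typeof(x) __x =				\
  		(*(volatile typeof(__x) *)&(x));			\
  	/* Alpha: order later address-dependent loads after this one */\
  	mb();								\
  	(typeof(x))__x;							\
  })

  #define __smp_load_acquire(p)						\
  ({									\
  	/* The mb() inside __READ_ONCE() provides the acquire */	\
  	__unqual_scalar_typeof(*p) ___p1 = __READ_ONCE(*p);		\
  	compiletime_assert_atomic_type(*p);				\
  	(typeof(*p))___p1;						\
  })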