Message-ID: <20200702093239.GA15391@C02TD0UTHF1T.local>
Date: Thu, 2 Jul 2020 10:32:39 +0100
From: Mark Rutland <mark.rutland@....com>
To: Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Sami Tolvanen <samitolvanen@...gle.com>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Kees Cook <keescook@...omium.org>,
Marco Elver <elver@...gle.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Matt Turner <mattst88@...il.com>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>,
Richard Henderson <rth@...ddle.net>,
Peter Zijlstra <peterz@...radead.org>,
Alan Stern <stern@...land.harvard.edu>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Arnd Bergmann <arnd@...db.de>,
Boqun Feng <boqun.feng@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
linux-arm-kernel@...ts.infradead.org, linux-alpha@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kernel-team@...roid.com
Subject: Re: [PATCH 04/18] alpha: Override READ_ONCE() with barriered
implementation
On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> Rather than relying on the core code to use smp_read_barrier_depends()
> as part of the READ_ONCE() definition, instead override __READ_ONCE()
> in the Alpha code so that it is treated the same way as
> smp_load_acquire().
>
> Acked-by: Paul E. McKenney <paulmck@...nel.org>
> Signed-off-by: Will Deacon <will@...nel.org>
> ---
> arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
> arch/alpha/include/asm/rwonce.h | 19 ++++++++++
> 2 files changed, 26 insertions(+), 54 deletions(-)
> create mode 100644 arch/alpha/include/asm/rwonce.h
>
> diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> index 92ec486a4f9e..2ecd068d91d1 100644
> --- a/arch/alpha/include/asm/barrier.h
> +++ b/arch/alpha/include/asm/barrier.h
> @@ -2,64 +2,17 @@
> #ifndef __BARRIER_H
> #define __BARRIER_H
>
> -#include <asm/compiler.h>
> -
> #define mb() __asm__ __volatile__("mb": : :"memory")
> #define rmb() __asm__ __volatile__("mb": : :"memory")
> #define wmb() __asm__ __volatile__("wmb": : :"memory")
> -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p) \
> +({ \
> + __unqual_scalar_typeof(*p) ___p1 = \
> + (*(volatile typeof(___p1) *)(p)); \
> + compiletime_assert_atomic_type(*p); \
> + ___p1; \
> +})
Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to provide the acquire semantics?
IIUC, prior to this commit Alpha would have used the asm-generic
__smp_load_acquire, i.e.
| #ifndef __smp_load_acquire
| #define __smp_load_acquire(p) \
| ({ \
| __unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p); \
| compiletime_assert_atomic_type(*p); \
| __smp_mb(); \
| (typeof(*p))___p1; \
| })
| #endif
... where the __smp_mb() would be Alpha's mb() from earlier in the patch
context, i.e.
| #define mb() __asm__ __volatile__("mb": : :"memory")
... so don't we need something similar before returning ___p1 above in
__smp_load_acquire() (and also matching the old read_barrier_depends())?
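For concreteness, I'd expect something like the sketch below (untested,
and assuming the mb() definition from earlier in the patch) to provide
the acquire ordering:

  #define __smp_load_acquire(p)						\
  ({									\
	__unqual_scalar_typeof(*p) ___p1 =				\
		(*(volatile typeof(___p1) *)(p));			\
	compiletime_assert_atomic_type(*p);				\
	mb();	/* order the load above before subsequent accesses */	\
	(typeof(*p))___p1;						\
  })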
[...]
> +#include <asm/barrier.h>
> +
> +/*
> + * Alpha is apparently daft enough to reorder address-dependent loads
> + * on some CPU implementations. Knock some common sense into it with
> + * a memory barrier in READ_ONCE().
> + */
> +#define __READ_ONCE(x) __smp_load_acquire(&(x))
As above, I don't see a memory barrier implied here, so this doesn't
look quite right.
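To spell out the case I have in mind, here's a minimal sketch (variable
names are illustrative) of the address-dependent load pattern that the
barrier in READ_ONCE() is there to order:

  int data = 0;
  int *ptr = NULL;
  int x;

  /* Writer */
  data = 1;
  smp_wmb();			/* publish the data before the pointer */
  WRITE_ONCE(ptr, &data);

  /* Reader */
  int *p = READ_ONCE(ptr);	/* must imply a barrier on Alpha */
  if (p)
	x = *p;			/* otherwise this can still observe data == 0 */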
Thanks,
Mark.