Message-ID: <20200110174506.GK33536@lakrids.cambridge.arm.com>
Date: Fri, 10 Jan 2020 17:45:07 +0000
From: Mark Rutland <mark.rutland@....com>
To: Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
kernel-team@...roid.com, Michael Ellerman <mpe@...erman.id.au>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Segher Boessenkool <segher@...nel.crashing.org>,
Christian Borntraeger <borntraeger@...ibm.com>,
Luc Van Oostenryck <luc.vanoostenryck@...il.com>,
Arnd Bergmann <arnd@...db.de>
Subject: Re: [RFC PATCH 8/8] arm64: barrier: Use '__unqual_scalar_typeof' for
acquire/release macros

On Fri, Jan 10, 2020 at 04:56:36PM +0000, Will Deacon wrote:
> Passing volatile-qualified pointers to the arm64 implementations of the
> load-acquire/store-release macros results in a re-load from the stack
> and a bunch of associated stack-protector churn due to the temporary
> result variable inheriting the volatile semantics thanks to the use of
> 'typeof()'.
>
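
As a stand-alone illustration of the problem (a sketch, not code from the
patch), consider what happens when typeof() picks up the volatile
qualifier from the pointee:

        void example(volatile int *p, int *out)
        {
                /*
                 * typeof(*p) is 'volatile int', so __tmp is itself a
                 * volatile object: the compiler cannot cache it in a
                 * register and must re-read its stack slot on every use.
                 */
                typeof(*p) __tmp = *p;

                out[0] = __tmp;
                out[1] = __tmp;        /* second use re-loads from the stack */
        }

With an unqualified 'int' temporary the compiler is free to keep the
value in a register, so the temporary need not be materialised on the
stack at all.
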
> Define these temporary variables using '__unqual_scalar_typeof' to drop
> the volatile qualifier in the case that they are scalar types.
>
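
For anyone unfamiliar with the helper, one way to build such a
qualifier-stripping typeof is with C11 _Generic. A reduced sketch (the
macro name is invented and only 'int'/'unsigned int' are handled here,
so this illustrates the technique rather than the kernel's exact
definition):

        /*
         * The controlling expression is lvalue-converted, so a
         * 'volatile int' lvalue selects the plain 'int' association.
         * The selected result is a bare constant, so typeof() of the
         * whole expression is unqualified. Anything not listed falls
         * through to 'default' and keeps its original type.
         */
        #define __unqual_int_typeof(x) typeof(                  \
                _Generic((x),                                   \
                         int:           (int)0,                 \
                         unsigned int:  (unsigned int)0,        \
                         default:       (x)))
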
> Cc: Mark Rutland <mark.rutland@....com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Arnd Bergmann <arnd@...db.de>
> Signed-off-by: Will Deacon <will@...nel.org>

Based on my understanding of __unqual_scalar_typeof(), these changes
look sound to me:

Acked-by: Mark Rutland <mark.rutland@....com>

Mark.

> ---
> arch/arm64/include/asm/barrier.h | 16 ++++++++--------
> 1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 7d9cc5ec4971..fb4c27506ef4 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -76,8 +76,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
> #define __smp_store_release(p, v) \
> do { \
> typeof(p) __p = (p); \
> - union { typeof(*p) __val; char __c[1]; } __u = \
> - { .__val = (__force typeof(*p)) (v) }; \
> + union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u = \
> + { .__val = (__force __unqual_scalar_typeof(*p)) (v) }; \
> compiletime_assert_atomic_type(*p); \
> kasan_check_write(__p, sizeof(*p)); \
> switch (sizeof(*p)) { \
> @@ -110,7 +110,7 @@ do { \
>
> #define __smp_load_acquire(p) \
> ({ \
> - union { typeof(*p) __val; char __c[1]; } __u; \
> + union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u; \
> typeof(p) __p = (p); \
> compiletime_assert_atomic_type(*p); \
> kasan_check_read(__p, sizeof(*p)); \
> @@ -136,33 +136,33 @@ do { \
> : "Q" (*__p) : "memory"); \
> break; \
> } \
> - __u.__val; \
> + (typeof(*p))__u.__val; \
> })
>
> #define smp_cond_load_relaxed(ptr, cond_expr) \
> ({ \
> typeof(ptr) __PTR = (ptr); \
> - typeof(*ptr) VAL; \
> + __unqual_scalar_typeof(*ptr) VAL; \
> for (;;) { \
> VAL = READ_ONCE(*__PTR); \
> if (cond_expr) \
> break; \
> __cmpwait_relaxed(__PTR, VAL); \
> } \
> - VAL; \
> + (typeof(*ptr))VAL; \
> })
>
> #define smp_cond_load_acquire(ptr, cond_expr) \
> ({ \
> typeof(ptr) __PTR = (ptr); \
> - typeof(*ptr) VAL; \
> + __unqual_scalar_typeof(*ptr) VAL; \
> for (;;) { \
> VAL = smp_load_acquire(__PTR); \
> if (cond_expr) \
> break; \
> __cmpwait_relaxed(__PTR, VAL); \
> } \
> - VAL; \
> + (typeof(*ptr))VAL; \
> })
>
> #include <asm-generic/barrier.h>
> --
> 2.25.0.rc1.283.g88dfdc4193-goog
>
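
For context on how the smp_cond_load_*() macros touched above are
consumed, a minimal usage sketch (the helper name is invented, not taken
from the tree):

        /*
         * Spin until the flag becomes non-zero, with acquire ordering
         * on the final load. 'VAL' is the variable the macro provides
         * to the condition expression; with the hunks above it is
         * declared with __unqual_scalar_typeof(*ptr), so even though
         * 'flag' points to a volatile int, VAL itself can live in a
         * register while we spin.
         */
        static inline int wait_for_flag(volatile int *flag)
        {
                return smp_cond_load_acquire(flag, VAL != 0);
        }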