Message-ID: <158929421507.390.1894315072790883073.tip-bot2@tip-bot2>
Date: Tue, 12 May 2020 14:36:55 -0000
From: "tip-bot2 for Will Deacon" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Arnd Bergmann <arnd@...db.de>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: locking/kcsan] arm64: barrier: Use '__unqual_scalar_typeof' for
 acquire/release macros

The following commit has been merged into the locking/kcsan branch of tip:

Commit-ID:     a9e777c275422371c86a3bdc8d81f4a21ec2a63c
Gitweb:        https://git.kernel.org/tip/a9e777c275422371c86a3bdc8d81f4a21ec2a63c
Author:        Will Deacon <will@...nel.org>
AuthorDate:    Mon, 11 May 2020 21:41:46 +01:00
Committer:     Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Tue, 12 May 2020 11:04:16 +02:00

arm64: barrier: Use '__unqual_scalar_typeof' for acquire/release macros

Passing volatile-qualified pointers to the arm64 implementations of the
load-acquire/store-release macros results in a re-load from the stack
and a bunch of associated stack-protector churn, because the temporary
result variable inherits the volatile qualifier through the use of
'typeof()'.

Define these temporary variables using '__unqual_scalar_typeof' to drop
the volatile qualifier when they are scalar types.
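
For background, '__unqual_scalar_typeof()' (added to
<linux/compiler_types.h> earlier in this series) relies on C11
_Generic selection: the controlling expression undergoes lvalue
conversion, which discards qualifiers such as 'volatile'. A
simplified, userspace-compilable sketch of the idea follows; the
in-kernel definition enumerates all signed/unsigned scalar types,
while the three cases here are illustrative only:

  /*
   * Cut-down sketch, not the kernel definition: map a scalar
   * expression to an unqualified dummy of the same type; anything
   * else falls through unchanged via 'default'.
   */
  #define __unqual_scalar_typeof(x) typeof(	\
  	_Generic((x),				\
  		char:	(char)0,		\
  		int:	(int)0,			\
  		long:	(long)0,		\
  		default: (x)))

  volatile int v;
  typeof(v) a;			/* 'a' is 'volatile int' */
  __unqual_scalar_typeof(v) b;	/* 'b' is plain 'int' */
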
Signed-off-by: Will Deacon <will@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Mark Rutland <mark.rutland@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Arnd Bergmann <arnd@...db.de>
Link: https://lkml.kernel.org/r/20200511204150.27858-15-will@kernel.org
---
 arch/arm64/include/asm/barrier.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 7d9cc5e..fb4c275 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -76,8 +76,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
 #define __smp_store_release(p, v)					\
 do {									\
 	typeof(p) __p = (p);						\
-	union { typeof(*p) __val; char __c[1]; } __u =			\
-		{ .__val = (__force typeof(*p)) (v) };			\
+	union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u =	\
+		{ .__val = (__force __unqual_scalar_typeof(*p)) (v) };	\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_write(__p, sizeof(*p));				\
 	switch (sizeof(*p)) {						\
@@ -110,7 +110,7 @@ do {									\
 
 #define __smp_load_acquire(p)						\
 ({									\
-	union { typeof(*p) __val; char __c[1]; } __u;			\
+	union { __unqual_scalar_typeof(*p) __val; char __c[1]; } __u;	\
 	typeof(p) __p = (p);						\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_read(__p, sizeof(*p));				\
@@ -136,33 +136,33 @@ do {									\
 			: "Q" (*__p) : "memory");			\
 		break;							\
 	}								\
-	__u.__val;							\
+	(typeof(*p))__u.__val;						\
 })
 
 #define smp_cond_load_relaxed(ptr, cond_expr)				\
 ({									\
 	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	__unqual_scalar_typeof(*ptr) VAL;				\
 	for (;;) {							\
 		VAL = READ_ONCE(*__PTR);				\
 		if (cond_expr)						\
 			break;						\
 		__cmpwait_relaxed(__PTR, VAL);				\
 	}								\
-	VAL;								\
+	(typeof(*ptr))VAL;						\
 })
 
 #define smp_cond_load_acquire(ptr, cond_expr)				\
 ({									\
 	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	__unqual_scalar_typeof(*ptr) VAL;				\
 	for (;;) {							\
 		VAL = smp_load_acquire(__PTR);				\
 		if (cond_expr)						\
 			break;						\
 		__cmpwait_relaxed(__PTR, VAL);				\
 	}								\
-	VAL;								\
+	(typeof(*ptr))VAL;						\
 })
 
 #include <asm-generic/barrier.h>
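
As a rough illustration of the codegen problem addressed above,
consider a hypothetical userspace approximation; the function names
and the cut-down helper are made up for illustration, and the two
variants can be compared with 'gcc -O2 -S':

  #define __unqual_scalar_typeof(x) typeof(	\
  	_Generic((x),				\
  		int:	(int)0,			\
  		long:	(long)0,		\
  		default: (x)))

  volatile int ready;

  int wait_old(void)
  {
  	typeof(ready) val;	/* 'volatile int': must live in memory */

  	do {
  		val = ready;	/* store to the stack slot ... */
  	} while (!val);		/* ... and re-load it for the test */
  	return val;
  }

  int wait_new(void)
  {
  	__unqual_scalar_typeof(ready) val;	/* plain 'int': can stay in a register */

  	do {
  		val = ready;
  	} while (!val);
  	return val;
  }

Note also that the hunks above cast the final result back with
'(typeof(*p))' or '(typeof(*ptr))': rvalues carry no qualifiers, so
this keeps the macros' nominal return type intact without
re-introducing the volatile-induced stack traffic.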