Message-ID: <20200513111447.GE3001@hirez.programming.kicks-ass.net>
Date: Wed, 13 May 2020 13:14:47 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Marco Elver <elver@...gle.com>
Cc: Will Deacon <will@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E. McKenney" <paulmck@...nel.org>,
Ingo Molnar <mingo@...nel.org>, dvyukov@...gle.com
Subject: Re: [PATCH v5 00/18] Rework READ_ONCE() to improve codegen
On Wed, May 13, 2020 at 01:10:57PM +0200, Peter Zijlstra wrote:
> So then I end up with something like the below, and I've validated that it
> does not generate instrumentation... HOWEVER, I now need ~10G of memory
> and many seconds to compile each file in arch/x86/kernel/.
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 3bb962959d8b..48f85d1d2db6 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -241,7 +241,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
> * atomicity or dependency ordering guarantees. Note that this may result
> * in tears!
> */
> -#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
> +#define __READ_ONCE(x) data_race((*(const volatile __unqual_scalar_typeof(x) *)&(x)))
>
> #define __READ_ONCE_SCALAR(x) \
> ({ \
> @@ -260,7 +260,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
>
> #define __WRITE_ONCE(x, val) \
> do { \
> - *(volatile typeof(x) *)&(x) = (val); \
> + data_race(*(volatile typeof(x) *)&(x) = (val)); \
> } while (0)
>
> #define __WRITE_ONCE_SCALAR(x, val) \
The above is responsible for that; the variant below is _MUCH_ better
again. It really does not like nested data_race(), as in it _REALLY_
doesn't like it.
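The problem is macro expansion: data_race() evaluates its argument once
under typeof() and once again for the actual access, so each nesting
level roughly doubles the preprocessed output. A standalone sketch that
shows the same geometric blowup (DR() is a made-up stand-in here, not
the kernel macro):

/*
 * Illustration only; DR() mimics the shape of data_race(): the
 * argument 'expr' is expanded twice, once under __typeof__() and
 * once for the actual evaluation.  Nesting it n levels deep thus
 * yields ~2^n copies of the innermost expression after cpp.
 */
#define DR(expr)					\
({							\
	__typeof__(({ expr; })) __v;			\
	__v = ({ expr; });				\
	__v;						\
})

int main(void)
{
	int x = 1;

	/* 4 levels -> ~2^4 = 16 copies of 'x' in the .i output. */
	return DR(DR(DR(DR(x))));
}

Watch `gcc -E dr.c | wc -c` roughly double with every extra DR() level.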
---
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 3bb962959d8b..2ea532b19e75 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -241,12 +241,12 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
* atomicity or dependency ordering guarantees. Note that this may result
* in tears!
*/
-#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#define __READ_ONCE(x) data_race((*(const volatile __unqual_scalar_typeof(x) *)&(x)))

#define __READ_ONCE_SCALAR(x) \
({ \
typeof(x) *__xp = &(x); \
- __unqual_scalar_typeof(x) __x = data_race(__READ_ONCE(*__xp)); \
+ __unqual_scalar_typeof(x) __x = __READ_ONCE(*__xp); \
kcsan_check_atomic_read(__xp, sizeof(*__xp)); \
smp_read_barrier_depends(); \
(typeof(x))__x; \
@@ -260,14 +260,14 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,

#define __WRITE_ONCE(x, val) \
do { \
- *(volatile typeof(x) *)&(x) = (val); \
+ data_race(*(volatile typeof(x) *)&(x) = (val)); \
} while (0)

#define __WRITE_ONCE_SCALAR(x, val) \
do { \
typeof(x) *__xp = &(x); \
kcsan_check_atomic_write(__xp, sizeof(*__xp)); \
- data_race(({ __WRITE_ONCE(*__xp, val); 0; })); \
+ __WRITE_ONCE(*__xp, val); \
} while (0)

#define WRITE_ONCE(x, val) \
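
FWIW, the blowup is easy to observe via kbuild's preprocessor targets;
pick any file in arch/x86/kernel/, e.g. (setup.c chosen arbitrarily):

  $ make arch/x86/kernel/setup.i
  $ wc -c arch/x86/kernel/setup.i

With the first (nested data_race()) variant the preprocessed output,
and the memory needed to chew through it, grows geometrically; with the
variant above it stays sane.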