Message-Id: <20200512183839.2373-1-elver@google.com>
Date: Tue, 12 May 2020 20:38:39 +0200
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com
Cc: linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com,
Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH] READ_ONCE, WRITE_ONCE, kcsan: Perform checks in __*_ONCE variants
If left plain, __READ_ONCE and __WRITE_ONCE are instrumented like any
other access, which results in many false positives with KCSAN. To fix
this, move the kcsan_check_atomic_*() and data_race() calls into the
__*_ONCE variants.
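For illustration, a minimal sketch (hypothetical names, not from the
tree) of the kind of __READ_ONCE() user that is affected:

  /* Hypothetical lockless reader; stats_count is written concurrently. */
  static unsigned long stats_count;

  static unsigned long stats_read(void)
  {
	/*
	 * If __READ_ONCE() is left plain, the volatile load here is
	 * instrumented like an unmarked access and KCSAN reports a race
	 * with the concurrent writer.  With the checks moved into
	 * __READ_ONCE(), it is checked as a marked atomic read instead.
	 */
	return __READ_ONCE(stats_count);
  }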
Cc: Will Deacon <will@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Paul E. McKenney <paulmck@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Marco Elver <elver@...gle.com>
---
A proposal to fix the problem with __READ_ONCE/__WRITE_ONCE and KCSAN
false positives.
Will, if this is completely off, please feel free to take this patch
and fiddle with it until it looks like what you want.
Note: Currently __WRITE_ONCE_SCALAR seems to serve no real purpose. Do
we still need it?
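For reference, with this patch applied __WRITE_ONCE_SCALAR ends up as
below (see the second hunk), i.e. it only takes the address once and
then forwards to __WRITE_ONCE():

  #define __WRITE_ONCE_SCALAR(x, val) \
  do { \
	typeof(x) *__xp = &(x); \
	__WRITE_ONCE(*__xp, val); \
  } while (0)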
---
include/linux/compiler.h | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 741c93c62ecf..e902ca5de811 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -224,13 +224,16 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  * atomicity or dependency ordering guarantees. Note that this may result
  * in tears!
  */
-#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#define __READ_ONCE(x) \
+({ \
+	kcsan_check_atomic_read(&(x), sizeof(x)); \
+	data_race((*(const volatile __unqual_scalar_typeof(x) *)&(x))); \
+})
 
 #define __READ_ONCE_SCALAR(x) \
 ({ \
 	typeof(x) *__xp = &(x); \
-	__unqual_scalar_typeof(x) __x = data_race(__READ_ONCE(*__xp)); \
-	kcsan_check_atomic_read(__xp, sizeof(*__xp)); \
+	__unqual_scalar_typeof(x) __x = __READ_ONCE(*__xp); \
 	smp_read_barrier_depends(); \
 	(typeof(x))__x; \
 })
@@ -243,14 +246,14 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #define __WRITE_ONCE(x, val) \
 do { \
-	*(volatile typeof(x) *)&(x) = (val); \
+	kcsan_check_atomic_write(&(x), sizeof(x)); \
+	data_race(*(volatile typeof(x) *)&(x) = (val)); \
 } while (0)
 
 #define __WRITE_ONCE_SCALAR(x, val) \
 do { \
 	typeof(x) *__xp = &(x); \
-	kcsan_check_atomic_write(__xp, sizeof(*__xp)); \
-	data_race(({ __WRITE_ONCE(*__xp, val); 0; })); \
+	__WRITE_ONCE(*__xp, val); \
 } while (0)
 
 #define WRITE_ONCE(x, val) \
--
2.26.2.645.ge9eca65c58-goog