Message-Id: <20211214220439.2236564-15-paulmck@kernel.org>
Date: Tue, 14 Dec 2021 14:04:25 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com,
kernel-team@...com, mingo@...nel.org
Cc: elver@...gle.com, andreyknvl@...gle.com, glider@...gle.com,
dvyukov@...gle.com, cai@....pw, boqun.feng@...il.com,
"Paul E . McKenney" <paulmck@...nel.org>
Subject: [PATCH kcsan 15/29] locking/barriers, kcsan: Support generic instrumentation
From: Marco Elver <elver@...gle.com>

Thus far, only the smp_*() barriers have been defined by asm-generic/barrier.h
in terms of the __smp_*() barriers, because the !SMP case is usually generic.

With the introduction of instrumentation, it also makes sense to have
asm-generic/barrier.h assist in the definition of instrumented versions of
mb(), rmb(), wmb(), dma_rmb(), and dma_wmb().
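
As an illustration, an architecture opts in by defining only the __ prefixed
variants before including asm-generic/barrier.h. A minimal sketch follows;
the architecture name and the barrier asm bodies are placeholders, not taken
from this patch or any real port:

/* arch/foo/include/asm/barrier.h -- hypothetical example */
#ifndef __ASM_FOO_BARRIER_H
#define __ASM_FOO_BARRIER_H

/*
 * Define only the __ prefixed variants; asm-generic/barrier.h then
 * provides mb() and friends with KCSAN instrumentation prepended,
 * e.g. mb() -> do { kcsan_mb(); __mb(); } while (0).
 */
#define __mb()	asm volatile ("" ::: "memory")	/* placeholder; a real port uses its fence insn */
#define __rmb()	asm volatile ("" ::: "memory")	/* placeholder */
#define __wmb()	asm volatile ("" ::: "memory")	/* placeholder */

#include <asm-generic/barrier.h>

#endif /* __ASM_FOO_BARRIER_H */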
Because there is no requirement to distinguish the !SMP case, the definition
can be simpler: there is no need to also provide fallbacks for the __ prefixed
cases; it suffices to check `defined(__<barrier>)` before defining the
KCSAN-instrumented versions.

This also allows the compiler to complain if an architecture accidentally
defines both the normal and the __ prefixed variant.
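
For example, the following standalone sketch (not kernel code) shows the
diagnostic: if an architecture were to define both mb() and __mb(), the
redefinition in asm-generic/barrier.h would produce a macro-redefinition
warning:

/* Both variants accidentally defined by the architecture: */
#define mb()	asm volatile ("" ::: "memory")
#define __mb()	asm volatile ("" ::: "memory")

/* asm-generic/barrier.h then redefines mb(): */
#ifdef __mb
#define mb() do { kcsan_mb(); __mb(); } while (0)	/* warning: "mb" redefined */
#endif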
Signed-off-by: Marco Elver <elver@...gle.com>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 27a9c9edfef66..02c4339c8eebf 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -21,6 +21,31 @@
#define nop() asm volatile ("nop")
#endif

+/*
+ * Architectures that want generic instrumentation can define __ prefixed
+ * variants of all barriers.
+ */
+
+#ifdef __mb
+#define mb() do { kcsan_mb(); __mb(); } while (0)
+#endif
+
+#ifdef __rmb
+#define rmb() do { kcsan_rmb(); __rmb(); } while (0)
+#endif
+
+#ifdef __wmb
+#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
+#endif
+
+#ifdef __dma_rmb
+#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
+#endif
+
+#ifdef __dma_wmb
+#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0)
+#endif
+
/*
* Force strict CPU ordering. And yes, this is required on UP too when we're
* talking to devices.
--
2.31.1.189.g2e36527f23