Message-ID: <20200204181605.GS2935@paulmck-ThinkPad-P72>
Date: Tue, 4 Feb 2020 10:16:05 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Marco Elver <elver@...gle.com>
Cc: andreyknvl@...gle.com, glider@...gle.com, dvyukov@...gle.com,
kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] kcsan: Add option to assume plain aligned writes
up to word size are atomic
On Tue, Feb 04, 2020 at 06:21:10PM +0100, Marco Elver wrote:
> This adds option KCSAN_ASSUME_PLAIN_WRITES_ATOMIC. If enabled, plain
> aligned writes up to word size are assumed to be atomic, and also not
> subject to other unsafe compiler optimizations resulting in data races.
>
> This option has been enabled by default to reflect current kernel-wide
> preferences.
>
> Signed-off-by: Marco Elver <elver@...gle.com>
Queued all three for further testing and review, thank you!
Thanx, Paul
> ---
> v2:
> * Also check for alignment of writes.
> ---
> kernel/kcsan/core.c | 22 +++++++++++++++++-----
> lib/Kconfig.kcsan | 27 ++++++++++++++++++++-------
> 2 files changed, 37 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> index 64b30f7716a12..e3c7d8f34f2ff 100644
> --- a/kernel/kcsan/core.c
> +++ b/kernel/kcsan/core.c
> @@ -5,6 +5,7 @@
> #include <linux/delay.h>
> #include <linux/export.h>
> #include <linux/init.h>
> +#include <linux/kernel.h>
> #include <linux/percpu.h>
> #include <linux/preempt.h>
> #include <linux/random.h>
> @@ -169,10 +170,20 @@ static __always_inline struct kcsan_ctx *get_ctx(void)
> 	return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
> }
>
> -static __always_inline bool is_atomic(const volatile void *ptr)
> +static __always_inline bool
> +is_atomic(const volatile void *ptr, size_t size, int type)
> {
> - struct kcsan_ctx *ctx = get_ctx();
> + struct kcsan_ctx *ctx;
> +
> + if ((type & KCSAN_ACCESS_ATOMIC) != 0)
> + return true;
>
> + if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) &&
> + (type & KCSAN_ACCESS_WRITE) != 0 && size <= sizeof(long) &&
> + IS_ALIGNED((unsigned long)ptr, size))
> + return true; /* Assume aligned writes up to word size are atomic. */
> +
> + ctx = get_ctx();
> if (unlikely(ctx->atomic_next > 0)) {
> /*
> * Because we do not have separate contexts for nested
> @@ -193,7 +204,8 @@ static __always_inline bool is_atomic(const volatile void *ptr)
> return kcsan_is_atomic(ptr);
> }
>
> -static __always_inline bool should_watch(const volatile void *ptr, int type)
> +static __always_inline bool
> +should_watch(const volatile void *ptr, size_t size, int type)
> {
> /*
> * Never set up watchpoints when memory operations are atomic.
> @@ -202,7 +214,7 @@ static __always_inline bool should_watch(const volatile void *ptr, int type)
> * should not count towards skipped instructions, and (2) to actually
> * decrement kcsan_atomic_next for consecutive instruction stream.
> */
> - if ((type & KCSAN_ACCESS_ATOMIC) != 0 || is_atomic(ptr))
> + if (is_atomic(ptr, size, type))
> return false;
>
> if (this_cpu_dec_return(kcsan_skip) >= 0)
> @@ -460,7 +472,7 @@ static __always_inline void check_access(const volatile void *ptr, size_t size,
> if (unlikely(watchpoint != NULL))
> kcsan_found_watchpoint(ptr, size, type, watchpoint,
> encoded_watchpoint);
> - else if (unlikely(should_watch(ptr, type)))
> + else if (unlikely(should_watch(ptr, size, type)))
> kcsan_setup_watchpoint(ptr, size, type);
> }
>
> diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
> index 3552990abcfe5..66126853dab02 100644
> --- a/lib/Kconfig.kcsan
> +++ b/lib/Kconfig.kcsan
> @@ -91,13 +91,13 @@ config KCSAN_REPORT_ONCE_IN_MS
> limiting reporting to avoid flooding the console with reports.
> Setting this to 0 disables rate limiting.
>
> -# Note that, while some of the below options could be turned into boot
> -# parameters, to optimize for the common use-case, we avoid this because: (a)
> -# it would impact performance (and we want to avoid static branch for all
> -# {READ,WRITE}_ONCE, atomic_*, bitops, etc.), and (b) complicate the design
> -# without real benefit. The main purpose of the below options is for use in
> -# fuzzer configs to control reported data races, and they are not expected
> -# to be switched frequently by a user.
> +# The main purpose of the below options is to control reported data races (e.g.
> +# in fuzzer configs), and are not expected to be switched frequently by other
> +# users. We could turn some of them into boot parameters, but given they should
> +# not be switched normally, let's keep them here to simplify configuration.
> +#
> +# The defaults below are chosen to be very conservative, and may miss certain
> +# bugs.
>
> config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
> bool "Report races of unknown origin"
> @@ -116,6 +116,19 @@ config KCSAN_REPORT_VALUE_CHANGE_ONLY
> the data value of the memory location was observed to remain
> unchanged, do not report the data race.
>
> +config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
> + bool "Assume that plain aligned writes up to word size are atomic"
> + default y
> + help
> + Assume that plain aligned writes up to word size are atomic by
> + default, and also not subject to other unsafe compiler optimizations
> + resulting in data races. This will cause KCSAN to not report data
> + races due to conflicts where the only plain accesses are aligned
> + writes up to word size: conflicts between marked reads and plain
> + aligned writes up to word size will not be reported as data races;
> + notice that data races between two conflicting plain aligned writes
> + will also not be reported.
> +
> config KCSAN_IGNORE_ATOMICS
> bool "Do not instrument marked atomic accesses"
> help
> --
> 2.25.0.341.g760bfbb309-goog
>