Message-ID: <20191022141103.GE11583@lakrids.cambridge.arm.com>
Date: Tue, 22 Oct 2019 15:11:04 +0100
From: Mark Rutland <mark.rutland@....com>
To: Marco Elver <elver@...gle.com>
Cc: akiyks@...il.com, stern@...land.harvard.edu, glider@...gle.com,
parri.andrea@...il.com, andreyknvl@...gle.com, luto@...nel.org,
ard.biesheuvel@...aro.org, arnd@...db.de, boqun.feng@...il.com,
bp@...en8.de, dja@...ens.net, dlustig@...dia.com,
dave.hansen@...ux.intel.com, dhowells@...hat.com,
dvyukov@...gle.com, hpa@...or.com, mingo@...hat.com,
j.alglave@....ac.uk, joel@...lfernandes.org, corbet@....net,
jpoimboe@...hat.com, luc.maranget@...ia.fr, npiggin@...il.com,
paulmck@...ux.ibm.com, peterz@...radead.org, tglx@...utronix.de,
will@...nel.org, kasan-dev@...glegroups.com,
linux-arch@...r.kernel.org, linux-doc@...r.kernel.org,
linux-efi@...r.kernel.org, linux-kbuild@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Subject: Re: [PATCH v2 1/8] kcsan: Add Kernel Concurrency Sanitizer
infrastructure

Hi Marco,

On Thu, Oct 17, 2019 at 04:12:58PM +0200, Marco Elver wrote:
> Kernel Concurrency Sanitizer (KCSAN) is a dynamic data-race detector for
> kernel space. KCSAN is a sampling watchpoint-based data-race detector.
> See the included Documentation/dev-tools/kcsan.rst for more details.
>
> This patch adds basic infrastructure, but does not yet enable KCSAN for
> any architecture.
>
> Signed-off-by: Marco Elver <elver@...gle.com>
> ---
> v2:
> * Elaborate comment about instrumentation calls emitted by compilers.
> * Replace kcsan_check_access(.., {true, false}) with
> kcsan_check_{read,write} for improved readability.
> * Change bug title of race of unknown origin to just say "data-race in".
> * Refine "Key Properties" in kcsan.rst, and mention observed slow-down.
> * Add comment about safety of find_watchpoint without user_access_save.
> * Remove unnecessary preempt_disable/enable and elaborate on comment why
> we want to disable interrupts and preemptions.
> * Use common struct kcsan_ctx in task_struct and for per-CPU interrupt
> contexts [Suggested by Mark Rutland].
This is generally looking good to me.

I have a few comments below. They're mostly style and naming
suggestions to minimize surprise, though I also have a couple of
queries (nested vs flat atomic regions, and the number of watchpoints).
[...]
> diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
> new file mode 100644
> index 000000000000..fd5de2ba3a16
> --- /dev/null
> +++ b/include/linux/kcsan.h
> @@ -0,0 +1,108 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _LINUX_KCSAN_H
> +#define _LINUX_KCSAN_H
> +
> +#include <linux/types.h>
> +#include <linux/kcsan-checks.h>
> +
> +#ifdef CONFIG_KCSAN
> +
> +/*
> + * Context for each thread of execution: for tasks, this is stored in
> + * task_struct, and interrupts access internal per-CPU storage.
> + */
> +struct kcsan_ctx {
> + int disable; /* disable counter */
Can we call this disable_count? That would match the convention used for
preempt_count, and make it clear this isn't a boolean.
> + int atomic_next; /* number of following atomic ops */
I'm a little unclear on why we need this given the begin ... end
helpers -- isn't knowing that we're in an atomic region sufficient?
> +
> + /*
> + * We use separate variables to store if we are in a nestable or flat
> + * atomic region. This helps make sure that an atomic region with
> + * nesting support is not suddenly aborted when a flat region is
> + * contained within. Effectively this allows supporting nesting flat
> + * atomic regions within an outer nestable atomic region. Support for
> + * this is required as there are cases where a seqlock reader critical
> + * section (flat atomic region) is contained within a seqlock writer
> + * critical section (nestable atomic region), and the "mismatching
> + * kcsan_end_atomic()" warning would trigger otherwise.
> + */
> + int atomic_region;
> + bool atomic_region_flat;
> +};
I think we need to introduce nestability and flatness first. How about:
	/*
	 * Some atomic sequences are flat, and cannot contain another
	 * atomic sequence. Other atomic sequences are nestable, and may
	 * contain other flat and/or nestable sequences.
	 *
	 * For example, a seqlock writer critical section is nestable
	 * and may contain a seqlock reader critical section, which is
	 * flat.
	 *
	 * To support this we track the depth of nesting, and whether
	 * the leaf level is flat.
	 */
	int atomic_nest_count;
	bool in_flat_atomic;
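
FWIW, with those fields I'd expect the "is this access in an atomic
region?" test to boil down to something like the below (an illustrative
helper I've made up for this mail, not something in the patch):

	static bool in_atomic_region(struct kcsan_ctx *ctx)
	{
		return ctx->atomic_nest_count > 0 || ctx->in_flat_atomic;
	}
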
That said, I'm not entirely clear on the distinction. Why would nesting
a reader within another reader not be legitimate?
> +
> +/**
> + * kcsan_init - initialize KCSAN runtime
> + */
> +void kcsan_init(void);
> +
> +/**
> + * kcsan_disable_current - disable KCSAN for the current context
> + *
> + * Supports nesting.
> + */
> +void kcsan_disable_current(void);
> +
> +/**
> + * kcsan_enable_current - re-enable KCSAN for the current context
> + *
> + * Supports nesting.
> + */
> +void kcsan_enable_current(void);
> +
> +/**
> + * kcsan_begin_atomic - use to denote an atomic region
> + *
> + * Accesses within the atomic region may appear to race with other accesses but
> + * should be considered atomic.
> + *
> + * @nest true if regions may be nested, or false for flat region
> + */
> +void kcsan_begin_atomic(bool nest);
> +
> +/**
> + * kcsan_end_atomic - end atomic region
> + *
> + * @nest must match argument to kcsan_begin_atomic().
> + */
> +void kcsan_end_atomic(bool nest);
> +
Similar to the check_{read,write}() naming, could we get rid of the
bool argument and split this into separate nestable and flat functions?
That makes call sites easier to read in context, e.g.
	kcsan_nestable_atomic_begin();
	...
	kcsan_nestable_atomic_end();

... has a more obvious meaning than:

	kcsan_begin_atomic(true);
	...
	kcsan_end_atomic(true);
... and putting the begin/end at the end of the name makes it easier to
spot the matching pair.
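
To tie that together with the kcsan_ctx fields suggested above, I'd
imagine the helpers end up looking roughly like the below (an untested
sketch to show the shape, rather than a concrete implementation):

	void kcsan_nestable_atomic_begin(void)
	{
		/* Nestable regions only need to track their depth. */
		get_ctx()->atomic_nest_count++;
	}

	void kcsan_nestable_atomic_end(void)
	{
		struct kcsan_ctx *ctx = get_ctx();

		/* Catch an end without a matching begin. */
		if (WARN_ON(ctx->atomic_nest_count == 0))
			return;
		ctx->atomic_nest_count--;
	}

	void kcsan_flat_atomic_begin(void)
	{
		/* Flat regions don't nest, so a bool is sufficient. */
		get_ctx()->in_flat_atomic = true;
	}

	void kcsan_flat_atomic_end(void)
	{
		get_ctx()->in_flat_atomic = false;
	}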
[...]
> +static inline bool is_enabled(void)
> +{
> + return READ_ONCE(kcsan_enabled) && get_ctx()->disable == 0;
> +}
Can we please make this kcsan_is_enabled(), to avoid confusion with
IS_ENABLED()?
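i.e. (assuming the disable_count rename suggested above):

	static inline bool kcsan_is_enabled(void)
	{
		return READ_ONCE(kcsan_enabled) && get_ctx()->disable_count == 0;
	}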
> +static inline unsigned int get_delay(void)
> +{
> + unsigned int max_delay = in_task() ? CONFIG_KCSAN_UDELAY_MAX_TASK :
> + CONFIG_KCSAN_UDELAY_MAX_INTERRUPT;
> + return IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ?
> + ((prandom_u32() % max_delay) + 1) :
> + max_delay;
> +}
> +
> +/* === Public interface ===================================================== */
> +
> +void __init kcsan_init(void)
> +{
> + BUG_ON(!in_task());
> +
> + kcsan_debugfs_init();
> + kcsan_enable_current();
> +#ifdef CONFIG_KCSAN_EARLY_ENABLE
> + /*
> + * We are in the init task, and no other tasks should be running.
> + */
> + WRITE_ONCE(kcsan_enabled, true);
> +#endif
Where possible, please use IS_ENABLED() rather than ifdeffery for
portions of functions like this, e.g.
	/*
	 * We are in the init task, and no other tasks should be running.
	 */
	if (IS_ENABLED(CONFIG_KCSAN_EARLY_ENABLE))
		WRITE_ONCE(kcsan_enabled, true);
That makes the code a bit easier to read, and ensures the code always
gets build coverage, so changes are less likely to introduce a build
failure that only shows up when the option is enabled.
[...]
> +#ifdef CONFIG_KCSAN_DEBUG
> + kcsan_disable_current();
> + pr_err("KCSAN: watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n",
> + is_write ? "write" : "read", size, ptr,
> + watchpoint_slot((unsigned long)ptr),
> + encode_watchpoint((unsigned long)ptr, size, is_write));
> + kcsan_enable_current();
> +#endif
This can use IS_ENABLED(), e.g.
	if (IS_ENABLED(CONFIG_KCSAN_DEBUG)) {
		kcsan_disable_current();
		pr_err("KCSAN: watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n",
		       is_write ? "write" : "read", size, ptr,
		       watchpoint_slot((unsigned long)ptr),
		       encode_watchpoint((unsigned long)ptr, size, is_write));
		kcsan_enable_current();
	}
[...]
> +#ifdef CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
> + kcsan_report(ptr, size, is_write, smp_processor_id(),
> + kcsan_report_race_unknown_origin);
> +#endif
This can also use IS_ENABLED().
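e.g. something like the below (just the quoted call wrapped in the if):

	if (IS_ENABLED(CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN))
		kcsan_report(ptr, size, is_write, smp_processor_id(),
			     kcsan_report_race_unknown_origin);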
[...]
> diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h
> new file mode 100644
> index 000000000000..429479b3041d
> --- /dev/null
> +++ b/kernel/kcsan/kcsan.h
> @@ -0,0 +1,140 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _MM_KCSAN_KCSAN_H
> +#define _MM_KCSAN_KCSAN_H
> +
> +#include <linux/kcsan.h>
> +
> +/*
> + * Total number of watchpoints. An address range maps into a specific slot as
> + * specified in `encoding.h`. Although larger number of watchpoints may not even
> + * be usable due to limited thread count, a larger value will improve
> + * performance due to reducing cache-line contention.
> + */
> +#define KCSAN_NUM_WATCHPOINTS 64
Is there any documentation as to how 64 was chosen? It's fine if it's
arbitrary, but it would be good to know either way.
I wonder if this is something that might need to scale with NR_CPUS (or
nr_cpus).
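For example (numbers made up entirely, just to illustrate the idea of
scaling the pool rather than to suggest a specific constant):

	/* Scale the watchpoint pool with the number of possible CPUs. */
	#define KCSAN_NUM_WATCHPOINTS	(4 * NR_CPUS)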
> +enum kcsan_counter_id {
> + /*
> + * Number of watchpoints currently in use.
> + */
> + kcsan_counter_used_watchpoints,
Nit: typically enum values are capitalized (as coding-style.rst says).
That helps to make it clear each value is a constant rather than a
variable. Likewise for the other enums here.
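e.g. for the value quoted above, something like:

	enum kcsan_counter_id {
		/*
		 * Number of watchpoints currently in use.
		 */
		KCSAN_COUNTER_USED_WATCHPOINTS,
		...
	};
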
Thanks,
Mark.