Message-Id: <20201123132300.1759342-1-elver@google.com>
Date: Mon, 23 Nov 2020 14:23:00 +0100
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com, paulmck@...nel.org
Cc: will@...nel.org, peterz@...radead.org, tglx@...utronix.de,
mingo@...nel.org, mark.rutland@....com, boqun.feng@...il.com,
dvyukov@...gle.com, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org
Subject: [PATCH v2] kcsan: Avoid scheduler recursion by using non-instrumented preempt_{disable,enable}()
When enabling KCSAN for kernel/sched (by removing KCSAN_SANITIZE := n from
kernel/sched/Makefile) with CONFIG_DEBUG_PREEMPT=y, we can observe
recursion due to:
  check_access()                          [via instrumentation]
    kcsan_setup_watchpoint()
      reset_kcsan_skip()
        kcsan_prandom_u32_max()
          get_cpu_var()
            preempt_disable()
              preempt_count_add()         [in kernel/sched/core.c]
                check_access()            [via instrumentation]
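To make the loop concrete, below is a minimal user-space sketch of the
pattern (not kernel code): fake_check_access() and fake_preempt_disable()
are hypothetical stand-ins for the corresponding functions in the chain
above, and the depth cap exists only so the demo terminates.

	#include <stdio.h>

	static int preempt_count;
	static int depth;

	static void fake_preempt_disable(void);

	/*
	 * Stand-in for check_access()/kcsan_setup_watchpoint(): picking a
	 * random skip count needs per-CPU state, i.e. disabling preemption.
	 */
	static void fake_check_access(void)
	{
		if (++depth > 3) {	/* cap only so this demo terminates */
			printf("recursion reached depth %d, would not stop\n", depth);
			return;
		}
		fake_preempt_disable();	/* kcsan_prandom_u32_max() -> get_cpu_var() */
	}

	/*
	 * Stand-in for preempt_disable()/preempt_count_add() when the latter
	 * is built with instrumentation: the preempt-count update itself is
	 * reported back to the race detector.
	 */
	static void fake_preempt_disable(void)
	{
		preempt_count++;	/* instrumented access ... */
		fake_check_access();	/* ... re-enters the detector */
	}

	int main(void)
	{
		fake_check_access();	/* via instrumentation of any access */
		return 0;
	}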
Avoid this by rewriting kcsan_prandom_u32_max() to use only the
non-instrumented versions of preempt_disable() and preempt_enable(), which
do not call into scheduler code.
Note that, while this currently does not affect an unmodified kernel, it
would be good to keep a KCSAN kernel working when KCSAN_SANITIZE := n is
removed from kernel/sched/Makefile, to permit testing scheduler code with
KCSAN if desired.
Fixes: cd290ec24633 ("kcsan: Use tracing-safe version of prandom")
Signed-off-by: Marco Elver <elver@...gle.com>
---
v2:
* Update comment to also point out preempt_enable().
---
kernel/kcsan/core.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 3994a217bde7..10513f3e2349 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -284,10 +284,19 @@ should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *
  */
 static u32 kcsan_prandom_u32_max(u32 ep_ro)
 {
-	struct rnd_state *state = &get_cpu_var(kcsan_rand_state);
-	const u32 res = prandom_u32_state(state);
+	struct rnd_state *state;
+	u32 res;
+
+	/*
+	 * Avoid recursion with scheduler by using non-tracing versions of
+	 * preempt_disable() and preempt_enable() that do not call into
+	 * scheduler code.
+	 */
+	preempt_disable_notrace();
+	state = raw_cpu_ptr(&kcsan_rand_state);
+	res = prandom_u32_state(state);
+	preempt_enable_no_resched_notrace();
 
-	put_cpu_var(kcsan_rand_state);
 	return (u32)(((u64) res * ep_ro) >> 32);
 }
 
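As an aside, the unchanged return statement uses the common multiply-shift
range reduction. The user-space sketch below only illustrates what that
expression computes; the helper name bounded() and the use of rand() are
assumptions for the demo (the kernel code uses prandom_u32_state()): a
32-bit value is scaled into [0, ep_ro) without a division or modulo.

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	/*
	 * Same expression as in kcsan_prandom_u32_max(): treat random32 as a
	 * fraction of 2^32 and scale it by ep_ro, giving a result in [0, ep_ro).
	 */
	static uint32_t bounded(uint32_t random32, uint32_t ep_ro)
	{
		return (uint32_t)(((uint64_t)random32 * ep_ro) >> 32);
	}

	int main(void)
	{
		srand(0);
		for (int i = 0; i < 5; i++) {
			/* cheap stand-in for a real 32-bit PRNG output */
			uint32_t r = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
			printf("%10u -> %2u\n", r, bounded(r, 80));	/* bound to [0, 80) */
		}
		return 0;
	}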
--
2.29.2.454.gaff20da3a2-goog