Message-Id: <149562719270.15375.4565081030740506940.stgit@devbox>
Date: Wed, 24 May 2017 21:00:03 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Ingo Molnar <mingo@...nel.org>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H . Peter Anvin" <hpa@...or.com>
Subject: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

To enable the jump optimized probe with CONFIG_PREEMPT, use
synchronize_rcu_tasks() to wait until all tasks preempted
on the trampoline code are back on track.
Since a jump optimized kprobe can replace multiple
instructions, there can be tasks which are preempted
on the 2nd (or 3rd) instruction. If the kprobe
replaces those instructions with a single jump instruction,
such a task will resume in the middle of the jump
instruction when it returns to the preempted place,
which causes a kernel panic.
To avoid this, the kprobe optimizer first prepares a
detour route using a normal kprobe (e.g. an int3
breakpoint on x86), and then waits for the tasks which
were interrupted at such a place by calling
synchronize_sched() when CONFIG_PREEMPT=n.
With CONFIG_PREEMPT=y, things are more complicated, because
such an interrupted thread can itself be preempted (another
thread can be scheduled from the interrupt handler). So the
kprobes optimizer has to wait until those tasks are scheduled
normally again. In that case we can use synchronize_rcu_tasks(),
which ensures that all preempted tasks have gotten back on
track and been scheduled at least once.
Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
---
arch/Kconfig | 2 +-
kernel/kprobes.c | 23 ++++++++++++++++++++++-
2 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 6c00e5b..2abb8de 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
config OPTPROBES
def_bool y
depends on KPROBES && HAVE_OPTPROBES
- depends on !PREEMPT
+ select TASKS_RCU if PREEMPT
config KPROBES_ON_FTRACE
def_bool y
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9f60567..6d69074 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
static bool kprobes_allow_optimization;
/*
+ * Wait for the tasks/threads interrupted on the trampoline code.
+ * Since the threads running on dynamically allocated trampoline code
+ * can be interrupted, kprobes has to wait until those tasks are back
+ * on track and scheduled again. If the kernel is preemptive, a thread
+ * can be preempted by other tasks on the trampoline too. In that case,
+ * this calls synchronize_rcu_tasks() to wait for those tasks.
+ */
+static void synchronize_on_trampoline(void)
+{
+#ifdef CONFIG_PREEMPT
+ synchronize_rcu_tasks();
+#else
+ synchronize_sched();
+#endif
+}
+
+/*
* Call all pre_handler on the list, but ignores its return value.
* This must be called from arch-dep optimized caller.
*/
@@ -578,8 +595,12 @@ static void kprobe_optimizer(struct work_struct *work)
* there is a chance that Nth instruction is interrupted. In that
* case, running interrupt can return to 2nd-Nth byte of jump
* instruction. This wait is for avoiding it.
+ * With CONFIG_PREEMPT, an interrupt can lead to preemption. To wait
+ * for such threads, we use synchronize_rcu_tasks(), which ensures
+ * all preempted tasks are scheduled normally again. So we can ensure
+ * there are no threads still running there.
*/
- synchronize_sched();
+ synchronize_on_trampoline();
/* Step 3: Optimize kprobes after quiesence period */
do_optimize_kprobes();