Date:   Wed, 22 Nov 2017 22:26:36 +0000
From:   alexander.levin@...izon.com
To:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>
Cc:     Masami Hiramatsu <mhiramat@...nel.org>,
        Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        "Naveen N . Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
        "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Steven Rostedt" <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Ingo Molnar" <mingo@...nel.org>, alexander.levin@...izon.com
Subject: [PATCH AUTOSEL for 4.4 10/16] kprobes: Use synchronize_rcu_tasks()
 for optprobe with CONFIG_PREEMPT=y

From: Masami Hiramatsu <mhiramat@...nel.org>

[ Upstream commit a30b85df7d599f626973e9cd3056fe755bd778e0 ]

We want to wait for all potentially preempted kprobe trampoline
executions to have completed. This guarantees that any freed
trampoline memory is no longer in use by any task in the system.
synchronize_rcu_tasks() gives such a guarantee, so use it.

This also guarantees that we wait for all tasks that were potentially
preempted on the instructions which will be replaced with a jump.

Since this becomes a problem only when CONFIG_PREEMPT=y, select
TASKS_RCU in that case so that synchronize_rcu_tasks() is available.
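
For readers less familiar with Tasks-RCU, the ordering the patch relies
on can be sketched as follows. This is an illustrative, hypothetical
example only (none of the names below come from kernel/kprobes.c, and it
assumes CONFIG_TASKS_RCU=y and a vmalloc()-allocated buffer): the memory
is released only after synchronize_rcu_tasks() returns, i.e. after every
task has passed through a voluntary context switch or run in user space,
so no task can still be preempted inside it.

#include <linux/rcupdate.h>	/* synchronize_rcu_tasks() */
#include <linux/vmalloc.h>	/* vfree() */

/* Hypothetical helper: release a code buffer (e.g. a trampoline) that
 * preempted tasks may still be executing from.  Assumes 'buf' was
 * allocated with vmalloc(). */
static void example_free_trampoline(void *buf)
{
	/*
	 * Wait until every task has either voluntarily scheduled or
	 * executed in user space.  After that, no task can still be
	 * preempted in the middle of 'buf'.
	 */
	synchronize_rcu_tasks();

	/* Now the memory can be freed safely. */
	vfree(buf);
}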

Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
Acked-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Naveen N . Rao <naveen.n.rao@...ux.vnet.ibm.com>
Cc: Paul E . McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/150845661962.5443.17724352636247312231.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Sasha Levin <alexander.levin@...izon.com>
---
 arch/Kconfig     |  2 +-
 kernel/kprobes.c | 14 ++++++++------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 4e949e58b192..4b79a9006db4 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -83,7 +83,7 @@ config STATIC_KEYS_SELFTEST
 config OPTPROBES
 	def_bool y
 	depends on KPROBES && HAVE_OPTPROBES
-	depends on !PREEMPT
+	select TASKS_RCU if PREEMPT
 
 config KPROBES_ON_FTRACE
 	def_bool y
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 695763516908..9c52c0a2ce9f 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -540,13 +540,15 @@ static void kprobe_optimizer(struct work_struct *work)
 	do_unoptimize_kprobes();
 
 	/*
-	 * Step 2: Wait for quiesence period to ensure all running interrupts
-	 * are done. Because optprobe may modify multiple instructions
-	 * there is a chance that Nth instruction is interrupted. In that
-	 * case, running interrupt can return to 2nd-Nth byte of jump
-	 * instruction. This wait is for avoiding it.
+	 * Step 2: Wait for a quiescence period to ensure that all potentially
+	 * preempted tasks have scheduled normally. Because an optprobe may
+	 * modify multiple instructions, there is a chance that the Nth
+	 * instruction is preempted. In that case, such tasks can return to
+	 * the 2nd-Nth byte of the jump instruction. This wait avoids that.
+	 * Note that on a non-preemptive kernel, this is transparently
+	 * converted to synchronize_sched() to wait for all interrupts to complete.
 	 */
-	synchronize_sched();
+	synchronize_rcu_tasks();
 
 	/* Step 3: Optimize kprobes after quiesence period */
 	do_optimize_kprobes();
-- 
2.11.0
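
As extra context, not part of the patch itself: the code path changed
above runs from the kprobe optimizer work, which is exercised whenever an
optimized (jump-based) kprobe is unoptimized or unregistered. A minimal,
hypothetical module that registers and later unregisters a kprobe, and so
eventually reaches this trampoline-freeing path on kernels where the
probe gets jump-optimized, could look roughly like this (the probed
symbol name is an assumption and varies between kernel versions):

#include <linux/module.h>
#include <linux/kprobes.h>

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("probe hit at %p\n", p->addr);
	return 0;	/* let the probed instruction run normally */
}

static struct kprobe example_kp = {
	.symbol_name	= "_do_fork",	/* assumption: pick any probeable symbol */
	.pre_handler	= example_pre,
};

static int __init example_init(void)
{
	return register_kprobe(&example_kp);
}

static void __exit example_exit(void)
{
	/* If the probe was jump-optimized, its trampoline is eventually
	 * freed by the optimizer work; with this patch that happens only
	 * after synchronize_rcu_tasks() on CONFIG_PREEMPT=y kernels. */
	unregister_kprobe(&example_kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");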
