Message-ID: <20210220120013.59c07876@oasis.local.home>
Date:   Sat, 20 Feb 2021 12:00:13 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     LKML <linux-kernel@...r.kernel.org>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        "Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: [for-next][PATCH] kprobes: Fix to delay the kprobes jump
 optimization


Masami Hiramatsu (1):
      kprobes: Fix to delay the kprobes jump optimization

----
 kernel/kprobes.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)
---------------------------
commit c85c9a2c6e368dc94907e63babb18a9788e5c9b6
Author: Masami Hiramatsu <mhiramat@...nel.org>
Date:   Thu Feb 18 23:29:23 2021 +0900

    kprobes: Fix to delay the kprobes jump optimization
    
    Commit 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
    moved the kprobe setup into early_initcall(), which includes the
    kprobes jump optimization.
    The kprobes jump optimizer involves synchronize_rcu_tasks(), which
    depends on ksoftirqd and rcu_spawn_tasks_*(). However, since
    those are set up in core_initcall(), the kprobes jump optimizer
    cannot run in early_initcall().
    
    To avoid this issue, disable kprobe optimization in
    early_initcall() and enable it in subsys_initcall().
    
    Note that non-optimized kprobes are still available after
    early_initcall(); only the jump optimization is delayed.
    
    Link: https://lkml.kernel.org/r/161365856280.719838.12423085451287256713.stgit@devnote2
    
    Fixes: 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
    Cc: Ingo Molnar <mingo@...nel.org>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Thomas Gleixner <tglx@...utronix.de>
    Cc: RCU <rcu@...r.kernel.org>
    Cc: Michael Ellerman <mpe@...erman.id.au>
    Cc: Andrew Morton <akpm@...ux-foundation.org>
    Cc: Daniel Axtens <dja@...ens.net>
    Cc: Frederic Weisbecker <frederic@...nel.org>
    Cc: Neeraj Upadhyay <neeraju@...eaurora.org>
    Cc: Joel Fernandes <joel@...lfernandes.org>
    Cc: Michal Hocko <mhocko@...e.com>
    Cc: "Theodore Y . Ts'o" <tytso@....edu>
    Cc: Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
    Cc: stable@...r.kernel.org
    Reported-by: Paul E. McKenney <paulmck@...nel.org>
    Reported-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
    Reported-by: Uladzislau Rezki <urezki@...il.com>
    Acked-by: Paul E. McKenney <paulmck@...nel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index dd1d027455c4..745f08fdd7a6 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -861,7 +861,6 @@ static void try_to_optimize_kprobe(struct kprobe *p)
 	cpus_read_unlock();
 }
 
-#ifdef CONFIG_SYSCTL
 static void optimize_all_kprobes(void)
 {
 	struct hlist_head *head;
@@ -887,6 +886,7 @@ static void optimize_all_kprobes(void)
 	mutex_unlock(&kprobe_mutex);
 }
 
+#ifdef CONFIG_SYSCTL
 static void unoptimize_all_kprobes(void)
 {
 	struct hlist_head *head;
@@ -2500,18 +2500,14 @@ static int __init init_kprobes(void)
 		}
 	}
 
-#if defined(CONFIG_OPTPROBES)
-#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
-	/* Init kprobe_optinsn_slots */
-	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
-#endif
-	/* By default, kprobes can be optimized */
-	kprobes_allow_optimization = true;
-#endif
-
 	/* By default, kprobes are armed */
 	kprobes_all_disarmed = false;
 
+#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT)
+	/* Init kprobe_optinsn_slots for allocation */
+	kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+#endif
+
 	err = arch_init_kprobes();
 	if (!err)
 		err = register_die_notifier(&kprobe_exceptions_nb);
@@ -2526,6 +2522,21 @@ static int __init init_kprobes(void)
 }
 early_initcall(init_kprobes);
 
+#if defined(CONFIG_OPTPROBES)
+static int __init init_optprobes(void)
+{
+	/*
+	 * Enable kprobe optimization - this kicks the optimizer, which
+	 * depends on synchronize_rcu_tasks() and ksoftirqd, neither of
+	 * which is spawned in early initcall. So delay the optimization.
+	 */
+	optimize_all_kprobes();
+
+	return 0;
+}
+subsys_initcall(init_optprobes);
+#endif
+
 #ifdef CONFIG_DEBUG_FS
 static void report_probe(struct seq_file *pi, struct kprobe *p,
 		const char *sym, int offset, char *modname, struct kprobe *pp)
