Date:   Thu, 24 Feb 2022 15:51:53 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     x86@...nel.org, joao@...rdrivepizza.com, hjl.tools@...il.com,
        jpoimboe@...hat.com, andrew.cooper3@...rix.com
Cc:     linux-kernel@...r.kernel.org, peterz@...radead.org,
        ndesaulniers@...gle.com, keescook@...omium.org,
        samitolvanen@...gle.com, mark.rutland@....com,
        alyssa.milburn@...el.com, mbenes@...e.cz, rostedt@...dmis.org,
        mhiramat@...nel.org, alexei.starovoitov@...il.com
Subject: [PATCH v2 15/39] x86/ibt,kprobes: Fix more +0 assumptions

With IBT on, sym+0 is no longer the __fentry__ site: the ENDBR landing
pad sits at sym+0 and the __fentry__ call follows at sym+4.

NOTE: the architecture has a special case and *does* allow placing an
INT3 breakpoint over ENDBR, in which case #BP takes precedence over
#CP, so there is no need to disallow probing these instructions.

NOTE: irrespective of the above, there is a complication: direct
branches to functions are rewritten to skip the ENDBR, so any
breakpoint on the ENDBR may miss many of the actual function
executions.
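
For context (illustration only, not part of the patch): with
CONFIG_X86_KERNEL_IBT=y the entry layout that motivates this is roughly

	sym+0:  endbr64          /* 4-byte IBT landing pad */
	sym+4:  call __fentry__  /* the ftrace patch site  */
	sym+9:  ...              /* function body proper   */

so a "function entry" kprobe has to accept offsets 0 through 4, not
just offset 0.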

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 arch/x86/kernel/kprobes/core.c |   11 +++++++++++
 kernel/kprobes.c               |   15 ++++++++++++---
 2 files changed, 23 insertions(+), 3 deletions(-)

--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1156,3 +1162,8 @@ int arch_trampoline_kprobe(struct kprobe
 {
 	return 0;
 }
+
+bool arch_kprobe_on_func_entry(unsigned long offset)
+{
+	return offset <= 4*HAS_KERNEL_IBT;
+}
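
To spell out the hunk above (a sketch, not the kernel's code;
HAS_KERNEL_IBT is 1 under CONFIG_X86_KERNEL_IBT=y and 0 otherwise):

	#include <stdbool.h>	/* bool, for this standalone sketch */

	/* Illustration mirroring arch_kprobe_on_func_entry() above. */
	static bool on_func_entry_demo(unsigned long offset, int has_ibt)
	{
		/*
		 * !IBT: offset <= 0 -- only sym+0 is function entry.
		 *  IBT: offset <= 4 -- sym+0 (ENDBR) through sym+4
		 *       (the __fentry__ site) all count as entry.
		 */
		return offset <= 4 * has_ibt;
	}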
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -67,10 +67,19 @@ static bool kprobes_all_disarmed;
 static DEFINE_MUTEX(kprobe_mutex);
 static DEFINE_PER_CPU(struct kprobe *, kprobe_instance);
 
-kprobe_opcode_t * __weak kprobe_lookup_name(const char *name,
-					unsigned int __unused)
+kprobe_opcode_t * __weak kprobe_lookup_name(const char *name, unsigned int offset)
 {
-	return ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
+	kprobe_opcode_t *addr = NULL;
+
+	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)));
+#ifdef CONFIG_KPROBES_ON_FTRACE
+	if (addr && !offset) {
+		unsigned long faddr = ftrace_location((unsigned long)addr);
+		if (faddr)
+			addr = (kprobe_opcode_t *)faddr;
+	}
+#endif
+	return addr;
 }
 
 /*
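
Not part of the patch, but for completeness: the user-visible effect of
the kprobe_lookup_name() change is that a kprobe registered on a bare
symbol name (offset 0) is redirected to the ftrace location, which
under IBT sits past the ENDBR. A minimal module sketch, assuming a
hypothetical probe target "vfs_read":

	#include <linux/module.h>
	#include <linux/kprobes.h>

	static struct kprobe kp = {
		/*
		 * Hypothetical target; .offset stays 0, so with
		 * CONFIG_KPROBES_ON_FTRACE the lookup above lands on
		 * ftrace_location(), i.e. sym+4 when IBT is on.
		 */
		.symbol_name = "vfs_read",
	};

	static int __init kp_demo_init(void)
	{
		return register_kprobe(&kp);
	}

	static void __exit kp_demo_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(kp_demo_init);
	module_exit(kp_demo_exit);
	MODULE_LICENSE("GPL");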

