Date:   Sun, 13 May 2018 10:57:46 -0700
From:   tip-bot for Masami Hiramatsu <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     hpa@...or.com, ricardo.neri-calderon@...ux.intel.com,
        oleg@...hat.com, davem@...emloft.net, ast@...nel.org, yhs@...com,
        bp@...e.de, luto@...nel.org, tglx@...utronix.de,
        rostedt@...dmis.org, torvalds@...ux-foundation.org,
        linux-kernel@...r.kernel.org, francis.deslauriers@...icios.com,
        mingo@...nel.org, mhiramat@...nel.org
Subject: [tip:x86/urgent] kprobes/x86: Prohibit probing on exception masking
 instructions

Commit-ID:  ee6a7354a3629f9b65bc18dbe393503e9440d6f5
Gitweb:     https://git.kernel.org/tip/ee6a7354a3629f9b65bc18dbe393503e9440d6f5
Author:     Masami Hiramatsu <mhiramat@...nel.org>
AuthorDate: Wed, 9 May 2018 21:58:15 +0900
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Sun, 13 May 2018 19:52:55 +0200

kprobes/x86: Prohibit probing on exception masking instructions

Since the MOV SS and POP SS instructions delay exceptions until after the
next instruction has executed, single-stepping on them by kprobes must be
prohibited.

However, kprobes usually executes those instructions directly from the
trampoline buffer (a.k.a. the kprobe-booster), except for kprobes which have
a post_handler. Thus, if a kprobe user probes MOV SS with a post_handler,
the probe will single-step on the MOV SS.

This means it is safe when used via ftrace or perf/bpf, since those do not
use a post_handler.

In any case, since stack switching is a rare operation, it is safer to
simply reject kprobes on such instructions.
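
For illustration, here is a minimal sketch of the problematic case described
above: a kprobe registered with a post_handler, which is what forces the
single-step path instead of the boosted out-of-line execution. This is not
part of the patch, and the probed symbol "do_something" is a placeholder.
With this change applied, register_kprobe() is expected to fail when the
instruction at the probe address is MOV SS or POP SS, with or without a
post_handler.

#include <linux/init.h>
#include <linux/kprobes.h>
#include <linux/module.h>

/* Runs after the probed instruction has been single-stepped. */
static void sample_post_handler(struct kprobe *p, struct pt_regs *regs,
				unsigned long flags)
{
	pr_info("post_handler hit on %s\n", p->symbol_name);
}

static struct kprobe kp = {
	.symbol_name  = "do_something",		/* placeholder probe target */
	.post_handler = sample_post_handler,	/* disables the booster, forces single-step */
};

static int __init sample_init(void)
{
	/*
	 * With this patch, registration fails at the instruction-copy stage
	 * if the probed instruction is MOV SS or POP SS.
	 */
	return register_kprobe(&kp);
}

static void __exit sample_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(sample_init);
module_exit(sample_exit);
MODULE_LICENSE("GPL");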

Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Cc: Francis Deslauriers <francis.deslauriers@...icios.com>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Alexei Starovoitov <ast@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: "H . Peter Anvin" <hpa@...or.com>
Cc: Yonghong Song <yhs@...com>
Cc: Borislav Petkov <bp@...e.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "David S . Miller" <davem@...emloft.net>
Link: https://lkml.kernel.org/r/152587069574.17316.3311695234863248641.stgit@devbox

---
 arch/x86/include/asm/insn.h    | 18 ++++++++++++++++++
 arch/x86/kernel/kprobes/core.c |  4 ++++
 2 files changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h
index b3e32b010ab1..c2c01f84df75 100644
--- a/arch/x86/include/asm/insn.h
+++ b/arch/x86/include/asm/insn.h
@@ -208,4 +208,22 @@ static inline int insn_offset_immediate(struct insn *insn)
 	return insn_offset_displacement(insn) + insn->displacement.nbytes;
 }
 
+#define POP_SS_OPCODE 0x1f
+#define MOV_SREG_OPCODE 0x8e
+
+/*
+ * Intel SDM Vol.3A 6.8.3 states;
+ * "Any single-step trap that would be delivered following the MOV to SS
+ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
+ * suppressed."
+ * This function returns true if @insn is MOV SS or POP SS. On these
+ * instructions, single stepping is suppressed.
+ */
+static inline int insn_masking_exception(struct insn *insn)
+{
+	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
+		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
+		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
+}
+
 #endif /* _ASM_X86_INSN_H */
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 0715f827607c..6f4d42377fe5 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -370,6 +370,10 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
 	if (insn->opcode.bytes[0] == BREAKPOINT_INSTRUCTION)
 		return 0;
 
+	/* We should not singlestep on the exception masking instructions */
+	if (insn_masking_exception(insn))
+		return 0;
+
 #ifdef CONFIG_X86_64
 	/* Only x86_64 has RIP relative instructions */
 	if (insn_rip_relative(insn)) {
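
As a rough user-space illustration of the new check (my sketch, not kernel
code), the snippet below applies the same opcode test to raw instruction
bytes. The structure is a stripped-down stand-in for the kernel's struct
insn, the constants mirror POP_SS_OPCODE and MOV_SREG_OPCODE from the patch,
and bits 5:3 of the ModRM byte select the segment register, with index 2
being SS.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the two fields of struct insn that the check looks at. */
struct fake_insn {
	unsigned char opcode0;	/* first opcode byte */
	unsigned char modrm;	/* ModRM byte, if any */
};

#define POP_SS_OPCODE	0x1f	/* as defined by the patch */
#define MOV_SREG_OPCODE	0x8e	/* MOV Sreg, r/m16 */

/* Same test as insn_masking_exception(): ModRM reg field == 2 selects SS. */
static bool masks_exceptions(const struct fake_insn *insn)
{
	return insn->opcode0 == POP_SS_OPCODE ||
	       (insn->opcode0 == MOV_SREG_OPCODE &&
		((insn->modrm >> 3) & 0x7) == 2);
}

int main(void)
{
	/* 0x8e 0xd0 encodes "mov ss, ax": ModRM 0xd0 has reg == 2 (SS). */
	struct fake_insn mov_ss = { .opcode0 = 0x8e, .modrm = 0xd0 };
	/* 0x8e 0xd8 encodes "mov ds, ax": reg == 3 (DS), so it is allowed. */
	struct fake_insn mov_ds = { .opcode0 = 0x8e, .modrm = 0xd8 };

	printf("mov ss, ax -> %s\n", masks_exceptions(&mov_ss) ? "reject" : "allow");
	printf("mov ds, ax -> %s\n", masks_exceptions(&mov_ds) ? "reject" : "allow");
	return 0;
}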
