Message-ID: <tip-75013fb16f8484898eaa8d0b08fed942d790f029@git.kernel.org>
Date: Wed, 1 Mar 2017 01:13:01 -0800
From: tip-bot for Masami Hiramatsu <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: mhiramat@...nel.org, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, peterz@...radead.org, bp@...en8.de,
mingo@...nel.org, hpa@...or.com, tglx@...utronix.de
Subject: [tip:perf/urgent] kprobes/x86: Fix kernel panic when certain
exception-handling addresses are probed
Commit-ID: 75013fb16f8484898eaa8d0b08fed942d790f029
Gitweb: http://git.kernel.org/tip/75013fb16f8484898eaa8d0b08fed942d790f029
Author: Masami Hiramatsu <mhiramat@...nel.org>
AuthorDate: Wed, 1 Mar 2017 01:23:24 +0900
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 1 Mar 2017 09:56:13 +0100
kprobes/x86: Fix kernel panic when certain exception-handling addresses are probed
Fix the exception table entry check by using the probed address
instead of the address of the copied instruction.

This bug can cause an unexpected kernel panic if the user probes
an address where an exception can happen and is expected to be
fixed up via __ex_table (e.g. in copy_from_user()).

Unless a kprobe is placed on such an address, this doesn't
cause any problem.

This bug was introduced years ago, by commit:

  464846888d9a ("x86/kprobes: Fix a bug which can modify kernel code permanently").

(A minimal illustrative sketch of the corrected lookup appears after
the patch below.)
Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Fixes: 464846888d9a ("x86/kprobes: Fix a bug which can modify kernel code permanently")
Link: http://lkml.kernel.org/r/148829899399.28855.12581062400757221722.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/kernel/kprobes/common.h | 2 +-
arch/x86/kernel/kprobes/core.c | 6 +++---
arch/x86/kernel/kprobes/opt.c | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
index c6ee63f..d688826 100644
--- a/arch/x86/kernel/kprobes/common.h
+++ b/arch/x86/kernel/kprobes/common.h
@@ -67,7 +67,7 @@
#endif
/* Ensure if the instruction can be boostable */
-extern int can_boost(kprobe_opcode_t *instruction);
+extern int can_boost(kprobe_opcode_t *instruction, void *addr);
/* Recover instruction if given address is probed */
extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
unsigned long addr);
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 520b8df..88b3c94 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -166,12 +166,12 @@ NOKPROBE_SYMBOL(skip_prefixes);
* Returns non-zero if opcode is boostable.
* RIP relative instructions are adjusted at copying time in 64 bits mode
*/
-int can_boost(kprobe_opcode_t *opcodes)
+int can_boost(kprobe_opcode_t *opcodes, void *addr)
{
kprobe_opcode_t opcode;
kprobe_opcode_t *orig_opcodes = opcodes;
- if (search_exception_tables((unsigned long)opcodes))
+ if (search_exception_tables((unsigned long)addr))
return 0; /* Page fault may occur on this address. */
retry:
@@ -416,7 +416,7 @@ static int arch_copy_kprobe(struct kprobe *p)
* __copy_instruction can modify the displacement of the instruction,
* but it doesn't affect boostable check.
*/
- if (can_boost(p->ainsn.insn))
+ if (can_boost(p->ainsn.insn, p->addr))
p->ainsn.boostable = 0;
else
p->ainsn.boostable = -1;
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 3d1bee9..3e7c6e5 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -178,7 +178,7 @@ static int copy_optimized_instructions(u8 *dest, u8 *src)
while (len < RELATIVEJUMP_SIZE) {
ret = __copy_instruction(dest + len, src + len);
- if (!ret || !can_boost(dest + len))
+ if (!ret || !can_boost(dest + len, src + len))
return -EINVAL;
len += ret;
}
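
For illustration only, here is a minimal userspace C sketch of the idea behind
the fix, not the kernel's actual code: it models __ex_table as a small array of
address ranges and shows that looking up the address of a copied-instruction
buffer (which, like a kprobe insn slot, lies outside any fixup range) never
matches, while looking up the original probed address does. The names ex_entry,
ex_table, lookup_fixup() and the sample addresses are invented for this model;
the kernel's real helper is search_exception_tables().

/*
 * Userspace model of the exception-table lookup discussed above.
 * Types, names and addresses are invented for illustration only.
 */
#include <stdio.h>
#include <stdint.h>

struct ex_entry {
	uintptr_t start;	/* first address covered by a fixup */
	uintptr_t end;		/* one past the last covered address */
};

/* Pretend this range of "kernel text" has an exception fixup. */
static const struct ex_entry ex_table[] = {
	{ 0x1000, 0x1010 },
};

/* Stand-in for search_exception_tables(): does addr have a fixup? */
static int lookup_fixup(uintptr_t addr)
{
	for (size_t i = 0; i < sizeof(ex_table) / sizeof(ex_table[0]); i++)
		if (addr >= ex_table[i].start && addr < ex_table[i].end)
			return 1;
	return 0;
}

int main(void)
{
	uintptr_t probed_addr = 0x1008;	/* address the user put a kprobe on */
	uintptr_t copy_buf    = 0x9000;	/* address of the copied-instruction slot */

	/* Buggy check: the copy buffer is never inside a fixup range. */
	printf("lookup on copy buffer:    %d\n", lookup_fixup(copy_buf));

	/* Fixed check: the probed address is what the table describes. */
	printf("lookup on probed address: %d\n", lookup_fixup(probed_addr));

	return 0;
}

With the fix, can_boost() receives the probed address as a second argument and
runs the exception-table lookup on it, so instructions covered by a fixup are
correctly treated as non-boostable (and copy_optimized_instructions() rejects
them for optprobes).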