Message-Id: <150140976438.26851.12137555127116048085.stgit@devbox>
Date: Sun, 30 Jul 2017 19:16:14 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Francis Deslauriers <francis.deslauriers@...icios.com>,
mathieu.desnoyers@...icios.com, Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Masami Hiramatsu <mhiramat@...nel.org>,
Ananth N Mavinakayanahalli <ananth@...ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
"David S . Miller" <davem@...emloft.net>,
linux-kernel@...r.kernel.org,
Yoshinori Sato <ysato@...rs.sourceforge.jp>,
Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>
Subject: [PATCH -tip v8 4/4] [BUGFIX] kprobes/x86: Do not jump-optimize kprobes on irq entry code
Since the kernel segment registers are not yet set up at the
beginning of the irq-entry code, jump-optimizing a kprobe placed
on such code can crash the kernel when the optimized probe
accesses per-cpu variables.

If the kprobe is not optimized, however, it triggers an int3
exception, and the exception entry sets up the segment registers
correctly before the probe handlers run.

Check the probe address and refuse to optimize kprobes that land
in the irq-entry text. Such interrupt handlers can still be probed
by kprobes; the probes are just no longer jump-optimized.
Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
Reported-by: Francis Deslauriers <francis.deslauriers@...icios.com>
Tested-by: Francis Deslauriers <francis.deslauriers@...icios.com>
---
arch/x86/kernel/kprobes/opt.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 69ea0bc1cfa3..4f98aad38237 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -39,6 +39,7 @@
 #include <asm/insn.h>
 #include <asm/debugreg.h>
 #include <asm/set_memory.h>
+#include <asm/sections.h>
 
 #include "common.h"
 
@@ -251,10 +252,12 @@ static int can_optimize(unsigned long paddr)
 
 	/*
 	 * Do not optimize in the entry code due to the unstable
-	 * stack handling.
+	 * stack handling and registers setup.
 	 */
-	if ((paddr >= (unsigned long)__entry_text_start) &&
-	    (paddr < (unsigned long)__entry_text_end))
+	if (((paddr >= (unsigned long)__entry_text_start) &&
+	     (paddr < (unsigned long)__entry_text_end)) ||
+	    ((paddr >= (unsigned long)__irqentry_text_start) &&
+	     (paddr < (unsigned long)__irqentry_text_end)))
 		return 0;
 
 	/* Check there is enough space for a relative jump. */
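
For reference, here is an illustrative sketch (not part of this patch)
of a minimal test module for the scenario described in the changelog:
it registers a kprobe on a function placed in the irq-entry text and
touches per-cpu state from the pre-handler. The symbol name "do_IRQ"
and the module boilerplate are assumptions for the example; with this
fix applied the probe stays an int3 breakpoint, so the handler runs
only after the exception entry has set up the segment registers.

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/smp.h>

static int test_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/*
	 * Touch per-cpu state from the probe handler. This is only safe
	 * because the probe is not jump-optimized: the int3 exception
	 * entry has already set up the segment registers for us.
	 */
	pr_info_ratelimited("kprobe hit on CPU %d\n", smp_processor_id());
	return 0;
}

static struct kprobe test_kp = {
	.symbol_name	= "do_IRQ",	/* assumed irq-entry function */
	.pre_handler	= test_pre_handler,
};

static int __init test_init(void)
{
	return register_kprobe(&test_kp);
}

static void __exit test_exit(void)
{
	unregister_kprobe(&test_kp);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");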