Message-Id: <20170717004603.aeab2a32179d37a8e5dfe857@kernel.org>
Date: Mon, 17 Jul 2017 00:46:03 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Francis Deslauriers <francis.deslauriers@...icios.com>,
rostedt@...dmis.org, peterz@...radead.org,
mathieu.desnoyers@...icios.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] kprobe: Fix: add symbols to kprobe blacklist
On Sun, 16 Jul 2017 23:37:44 +0900
Masami Hiramatsu <mhiramat@...nel.org> wrote:
> So, the story according to the stack is:
>
> - optimized_callback() calls get_kprobe_ctlblk(), and this_cpu_ptr(&kprobe_ctlblk) caused a page fault (in apic_timer_interrupt; does that cause any problem?)
> - and the following call-chain occurred:
> async_page_fault -> error_entry -> trace_hardirqs_off_thunk ->
> trace_hardirqs_off_caller
> - "mov %gs:0xc400,%rdx" caused async_page_fault() again.
>
> Since trace_hardirqs_off_thunk() stores the general registers on
> the stack, there is some noise.
>
> [ 114.429637] FS: 00000000021e7880(0000) GS:ffff88001fd40000(0000) knlGS:0000000000000000
>
> So, the problem seems to be that the CPU cannot access the per-cpu pages.
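(For reference, and not part of the patch below: on x86-64, this_cpu_ptr()
reaches per-cpu data through the GS base, so get_kprobe_ctlblk() compiles
down to a %gs-relative load just like the "mov %gs:0xc400,%rdx" above. A
rough sketch of the kind of access optimized_callback() ends up doing; the
names example_ctlblk/example_optimized_callback are mine, only to
illustrate why it faults when GS still holds the user value:)

/*
 * Illustration only, not kernel source: per-cpu data is addressed
 * relative to the GS base on x86-64, so the this_cpu_ptr() below
 * becomes a %gs-relative load.  If we were interrupted before the
 * entry code did SWAPGS, GS still holds the user value and the
 * access faults.
 */
#include <linux/percpu.h>
#include <linux/kprobes.h>

static DEFINE_PER_CPU(struct kprobe_ctlblk, example_ctlblk);

static void example_optimized_callback(void)
{
	/* roughly: mov %gs:<offset of example_ctlblk>, %reg */
	struct kprobe_ctlblk *kcb = this_cpu_ptr(&example_ctlblk);

	kcb->kprobe_status = KPROBE_HIT_ACTIVE;
}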
OK, I got the root cause of this issue. In the irq-entry code the
segment registers are not yet set up for the kernel (e.g. when we were
interrupted in user mode), so we must not jump-optimize kprobes there.
We already check for this against __entry_text_start/__entry_text_end,
but that is not enough; we need to check the irqentry text section too.
Here is another patch, please try it.
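If it helps for testing, below is a minimal module sketch (untested; the
probed symbol is only an example, pick any irq-entry symbol from your
kallsyms). With this patch applied, /sys/kernel/debug/kprobes/list should
no longer show the probe as [OPTIMIZED]:

#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/kprobes.h>

/* Example symbol in the irqentry section; adjust for your kernel. */
static char symbol[KSYM_NAME_LEN] = "apic_timer_interrupt";
module_param_string(symbol, symbol, KSYM_NAME_LEN, 0644);

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* Nothing to do; we only care whether the probe gets optimized. */
	return 0;
}

static struct kprobe kp = {
	.pre_handler = example_pre,
};

static int __init example_init(void)
{
	int ret;

	kp.symbol_name = symbol;
	ret = register_kprobe(&kp);
	if (ret < 0) {
		pr_err("register_kprobe failed: %d\n", ret);
		return ret;
	}
	pr_info("kprobe planted at %s (%p)\n", symbol, kp.addr);
	/* Now check /sys/kernel/debug/kprobes/list for [OPTIMIZED]. */
	return 0;
}

static void __exit example_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");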
-----
kprobes/x86: Do not jump-optimize kprobes on irq entry code
From: Masami Hiramatsu <mhiramat@...nel.org>
Since the segment registers are not yet prepared for the kernel
in the irq-entry code, if a kprobe placed on such code is
jump-optimized, accessing per-cpu variables may cause a
kernel panic.
However, if the kprobe is not optimized, it triggers an int3
exception, whose entry path sets up the segment registers correctly.
This patch checks the probe address and, if it is in the irq-entry
code, prohibits optimizing such kprobes. This means we can still
probe such interrupt handlers with kprobes, but they are no
longer jump-optimized.
Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
Reported-by: Francis Deslauriers <francis.deslauriers@...icios.com>
---
arch/x86/entry/entry_64.S | 2 +-
arch/x86/include/asm/unwind.h | 1 +
arch/x86/kernel/kprobes/opt.c | 4 ++--
arch/x86/kernel/unwind_frame.c | 4 ++--
4 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a9a8027..95bca8b 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -675,7 +675,7 @@ apicinterrupt3 \num trace(\sym) smp_trace(\sym)
#endif
/* Make sure APIC interrupt handlers end up in the irqentry section: */
-#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN) || defined(CONFIG_KPROBES)
# define PUSH_SECTION_IRQENTRY .pushsection .irqentry.text, "ax"
# define POP_SECTION_IRQENTRY .popsection
#else
diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
index e667649..a9896fb9 100644
--- a/arch/x86/include/asm/unwind.h
+++ b/arch/x86/include/asm/unwind.h
@@ -28,6 +28,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
bool unwind_next_frame(struct unwind_state *state);
unsigned long unwind_get_return_address(struct unwind_state *state);
+bool in_entry_code(unsigned long ip);
static inline bool unwind_done(struct unwind_state *state)
{
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 69ea0bc..a51c144 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -39,6 +39,7 @@
#include <asm/insn.h>
#include <asm/debugreg.h>
#include <asm/set_memory.h>
+#include <asm/unwind.h>
#include "common.h"
@@ -253,8 +254,7 @@ static int can_optimize(unsigned long paddr)
* Do not optimize in the entry code due to the unstable
* stack handling.
*/
- if ((paddr >= (unsigned long)__entry_text_start) &&
- (paddr < (unsigned long)__entry_text_end))
+ if (in_entry_code(paddr))
return 0;
/* Check there is enough space for a relative jump. */
diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
index b9389d7..95123ce 100644
--- a/arch/x86/kernel/unwind_frame.c
+++ b/arch/x86/kernel/unwind_frame.c
@@ -84,14 +84,14 @@ static size_t regs_size(struct pt_regs *regs)
return sizeof(*regs);
}
-static bool in_entry_code(unsigned long ip)
+bool in_entry_code(unsigned long ip)
{
char *addr = (char *)ip;
if (addr >= __entry_text_start && addr < __entry_text_end)
return true;
-#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN)
+#if defined(CONFIG_FUNCTION_GRAPH_TRACER) || defined(CONFIG_KASAN) || defined(CONFIG_KPROBES)
if (addr >= __irqentry_text_start && addr < __irqentry_text_end)
return true;
#endif
--
Masami Hiramatsu <mhiramat@...nel.org>