Open Source and information security mailing list archives
 
Date:	Fri, 01 Jul 2011 22:14:08 +0900
From:	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
To:	Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	yrl.pp-manager.tt@...achi.com,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
Subject: [RFC PATCH -tip ] [BUGFIX] x86: Remove preempt disabling from kprobes

Steven Rostedt reported that putting a kprobe on an instruction that
loads preempt_count produces a wrong result, as shown below.

Kprobes requires preemption to be disabled as it single steps the code
it replaced with a breakpoint. But because the code that is single
stepped could be reading the preempt count, the kprobe disabling of the
preempt count can cause the wrong value to end up as a result. Here's an
example:

If we add a kprobe on an inc_preempt_count() call:

	[ preempt_count = 0 ]

	ld preempt_count, %eax	<<--- trap

		<trap>
		preempt_disable();
		[ preempt_count = 1]
		setup_singlestep();
		<trap return>

	[ preempt_count = 1 ]

	ld preempt_count, %eax

	[ %eax = 1 ]

		<trap>
		post_kprobe_handler()
			preempt_enable_no_resched();
			[ preempt_count = 0 ]
		<trap return>

	[ %eax = 1 ]

	add %eax,1

	[ %eax = 2 ]

	st %eax, preempt_count

	[ preempt_count = 2 ]


We have just caused preempt_count to be incremented twice when it should
have been incremented only once, and that throws everything else off.

To solve this, I've removed the preempt-disabling code from kprobes.
Since both the breakpoint exception and the kprobes single-step routine
run with interrupts disabled, kprobes no longer needs to disable
preemption while single-stepping.

This patch is for the -tip tree, but it can be applied to Linus' tree
as well.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
Reported-by: Steven Rostedt <rostedt@...dmis.org>
---

 arch/x86/kernel/kprobes.c |   18 ++++++------------
 1 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index f1a6244..5de9cfb 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -475,7 +475,6 @@ static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
 		 * stepping.
 		 */
 		regs->ip = (unsigned long)p->ainsn.insn;
-		preempt_enable_no_resched();
 		return;
 	}
 #endif
@@ -542,12 +541,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 
 	addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
 	/*
-	 * We don't want to be preempted for the entire
-	 * duration of kprobe processing. We conditionally
-	 * re-enable preemption at the end of this function,
-	 * and also in reenter_kprobe() and setup_singlestep().
+	 * We don't want to be preempted for the entire duration of kprobe
+	 * processing. This should be done by disabling interrupts instead
+	 * of incrementing preempt_count, because if the probed instruction
+	 * reads the count, it will get a wrong value.
+	 * Disabling irq is done by breakpoint exception.
 	 */
-	preempt_disable();
+	BUG_ON(!irqs_disabled());
 
 	kcb = get_kprobe_ctlblk();
 	p = get_kprobe(addr);
@@ -583,7 +583,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 		 * the original instruction.
 		 */
 		regs->ip = (unsigned long)addr;
-		preempt_enable_no_resched();
 		return 1;
 	} else if (kprobe_running()) {
 		p = __this_cpu_read(current_kprobe);
@@ -593,7 +592,6 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 		}
 	} /* else: not a kprobe fault; let the kernel handle it */
 
-	preempt_enable_no_resched();
 	return 0;
 }
 
@@ -917,7 +915,6 @@ static int __kprobes post_kprobe_handler(struct pt_regs *regs)
 	}
 	reset_current_kprobe();
 out:
-	preempt_enable_no_resched();
 
 	/*
 	 * if somebody else is singlestepping across a probe point, flags
@@ -951,7 +948,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 			restore_previous_kprobe(kcb);
 		else
 			reset_current_kprobe();
-		preempt_enable_no_resched();
 		break;
 	case KPROBE_HIT_ACTIVE:
 	case KPROBE_HIT_SSDONE:
@@ -1098,7 +1094,6 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 		memcpy((kprobe_opcode_t *)(kcb->jprobe_saved_sp),
 		       kcb->jprobes_stack,
 		       MIN_STACK_SIZE(kcb->jprobe_saved_sp));
-		preempt_enable_no_resched();
 		return 1;
 	}
 	return 0;
@@ -1530,7 +1525,6 @@ static int  __kprobes setup_detour_execution(struct kprobe *p,
 		regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
 		if (!reenter)
 			reset_current_kprobe();
-		preempt_enable_no_resched();
 		return 1;
 	}
 	return 0;
