Date:	Tue, 26 Apr 2011 16:19:03 +0200
From:	Jiri Olsa <jolsa@...hat.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	masami.hiramatsu.pt@...achi.com, linux-kernel@...r.kernel.org,
	mingo@...e.hu
Subject: Re: [PATCH] kprobes,x86: disable irq during optimized callback

On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> > hi,
> > 
> > the attached patch disables irqs during the optimized callback,
> > so we don't count any in-irq kprobes as missed.
> > 
> > Also, I think there's a small window where the current_kprobe
> > variable could be touched in a non-safe way, but I was not able
> > to hit any issue.
> > 
> > I'm not sure whether this is a bug or if it was intentional to have
> > irqs enabled during the pre_handler callback.
> 
> That's not very convincing. Did you see if we actually did miss events?
> If that's the case then it is a bug. The conversion to optimized kprobes
> should not cause events to be missed.

yep, running the following:

# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable

makes the optimized kprobes be counted as missed. They are not
missed in the same testcase with non-optimized kprobes. I should
have mentioned that, sorry ;)
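
For reference, the miss counts can be read back from kprobe_profile
in the same directory; each line lists the event name, its hit count,
and its miss count:

# cat kprobe_profile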

> 
> 
> > 
> > wbr,
> > jirka
> > 
> > ---
> > Disable irqs during the optimized callback, so we don't count
> > any in-irq kprobes as missed.
> > 
> > Interrupts are also disabled during non-optimized kprobe callbacks.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@...hat.com>
> > ---
> >  arch/x86/kernel/kprobes.c |    3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> > index c969fd9..917cb31 100644
> > --- a/arch/x86/kernel/kprobes.c
> > +++ b/arch/x86/kernel/kprobes.c
> > @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> >  					 struct pt_regs *regs)
> >  {
> >  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +	unsigned long flags;
> >  
> >  	/* This is possible if op is under delayed unoptimizing */
> >  	if (kprobe_disabled(&op->kp))
> >  		return;
> >  
> > +	local_irq_save(flags);
> >  	preempt_disable();
> 
> No reason to disable preemption if you disabled interrupts.

oops, missed that.. attaching a new patch

thanks,
jirka


---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.

Running the following:

# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable

makes the optimized kprobes be counted as missed. None are missed
if kprobe optimization is disabled.

Signed-off-by: Jiri Olsa <jolsa@...hat.com>
---
 arch/x86/kernel/kprobes.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
-- 
1.7.1
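
As a side note, the race the patch closes can be sketched in plain
userspace C (an analogy only, not kernel code; all names below are
invented for the illustration): a SIGALRM handler stands in for the
timer-interrupt kprobe, a flag for the per-cpu current_kprobe
variable, and sigprocmask() for local_irq_save()/local_irq_restore().

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t current_kprobe;	/* "a probe is running" */
static volatile sig_atomic_t nmissed;		/* kprobes_inc_nmissed_count() */

/* The "kprobe" hit from interrupt context. */
static void irq_probe(int sig)
{
	(void)sig;
	if (current_kprobe)
		nmissed++;	/* fired while another probe ran: missed */
}

/* Arm the "timer interrupt" to fire in 100ms. */
static void arm_timer(void)
{
	struct itimerval it = { .it_value = { 0, 100000 } };

	setitimer(ITIMER_REAL, &it, NULL);
}

/* Stand-in for optimized_callback(); block_irqs mimics the patch. */
static void optimized_callback(int block_irqs)
{
	sigset_t set, old;

	sigemptyset(&set);
	sigaddset(&set, SIGALRM);
	if (block_irqs)
		sigprocmask(SIG_BLOCK, &set, &old);	/* local_irq_save()    */

	current_kprobe = 1;
	usleep(200000);		/* pre-handler work; the "irq" lands here */
	current_kprobe = 0;

	if (block_irqs)
		sigprocmask(SIG_SETMASK, &old, NULL);	/* local_irq_restore() */
}

int main(void)
{
	signal(SIGALRM, irq_probe);

	arm_timer();
	optimized_callback(0);
	printf("irqs on:      nmissed=%d\n", (int)nmissed);

	nmissed = 0;
	arm_timer();
	optimized_callback(1);	/* pending "irq" delivered after the window */
	printf("irqs blocked: nmissed=%d\n", (int)nmissed);
	return 0;
}

In the first run the "probe" fires inside the window and is counted
as missed; in the second, the pending hit is only delivered after the
window closes, which is what local_irq_save() buys in the real
callback.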
