Date:	Wed, 27 Apr 2011 09:51:23 +0900
From:	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
To:	Jiri Olsa <jolsa@...hat.com>, mingo@...e.hu
Cc:	Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org
Subject:	Re: [PATCH] kprobes,x86: disable irq during optimized callback

(2011/04/26 23:19), Jiri Olsa wrote:
> On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
>> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
>>> hi,
>>>
>>> the attached patch disables irqs during the optimized callback,
>>> so we don't count any in-irq kprobes as missed.
>>>
>>> Also I think there's a small window where the current_kprobe
>>> variable could be touched in a non-safe way, but I was not able
>>> to hit any issue.
>>>
>>> I'm not sure whether this is a bug or if it was intentional to have
>>> irqs enabled during the pre_handler callback.
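
(My guess at that window, speculative since the report above couldn't
reproduce an actual failure: it is while current_kprobe points at
op->kp. With irqs enabled, a kprobe hit from interrupt context on the
same CPU can re-enter and save/overwrite current_kprobe and
kcb->kprobe_status underneath the optimized callback. Keeping irqs off
for the whole callback closes that window as well.)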
>>
>> That's not very convincing. Did you check whether we actually miss events?
>> If that's the case then it is a bug. The conversion to optimizing should
>> not cause events to be missed.
> 
> yep, running the following:
> 
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
> 
> causes the optimized kprobes to be missed (the miss counts show up in
> /debug/tracing/kprobe_profile). They are not missed in the same
> testcase with non-optimized kprobes. I should have mentioned that,
> sorry ;)

Good catch, that's right! kprobes' int3 automatically disables irqs,
but the optimized path doesn't, and that causes unexpected event loss.
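
(Background: int3 is delivered through an interrupt gate, so the CPU
clears IF on entry; the jump-optimized probe just branches into
trampoline code, so the callback runs with whatever irq state the
probed code had.)

To spell out the failure mode, here is a sketch paraphrasing the
optimized_callback() from the patch below (simplified, not the exact
upstream code):

	local_irq_save(flags);		/* the fix: irqs stay off ... */
	if (kprobe_running()) {
		/* another kprobe is already active on this CPU,
		 * e.g. we are an in-irq probe that interrupted a
		 * running callback: only the miss counter is bumped */
		kprobes_inc_nmissed_count(&op->kp);
	} else {
		__this_cpu_write(current_kprobe, &op->kp);
		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
		opt_pre_handler(&op->kp, regs);
		/* without irqs disabled, a timer interrupt arriving
		 * here finds current_kprobe set, so the probe on
		 * smp_apic_timer_interrupt takes the miss path above */
		__this_cpu_write(current_kprobe, NULL);
	}
	local_irq_restore(flags);	/* ... until the probe is done */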


> 
>>
>>
>>>
>>> wbr,
>>> jirka
>>>
>>> ---
>>> Disable irqs during the optimized callback, so we don't count
>>> any in-irq kprobes as missed.
>>>
>>> Interrupts are also disabled during non-optimized kprobe callbacks.
>>>
>>> Signed-off-by: Jiri Olsa <jolsa@...hat.com>
>>> ---
>>>  arch/x86/kernel/kprobes.c |    3 +++
>>>  1 files changed, 3 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
>>> index c969fd9..917cb31 100644
>>> --- a/arch/x86/kernel/kprobes.c
>>> +++ b/arch/x86/kernel/kprobes.c
>>> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>>>  					 struct pt_regs *regs)
>>>  {
>>>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>>> +	unsigned long flags;
>>>  
>>>  	/* This is possible if op is under delayed unoptimizing */
>>>  	if (kprobe_disabled(&op->kp))
>>>  		return;
>>>  
>>> +	local_irq_save(flags);
>>>  	preempt_disable();
>>
>> No reason to disable preemption if you disabled interrupts.

Right,
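with irqs off on this CPU nothing can preempt the current task,
since preemption is driven from interrupt context, so the
preempt_disable()/preempt_enable_no_resched() pair becomes redundant
once local_irq_save() is in place. Roughly:

	local_irq_save(flags);		/* irqs off: no preemption on this CPU */
	/* ... run the pre-handler ... */
	local_irq_restore(flags);	/* irqs (and preemption) possible again */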

> oops, missed that.. attaching a new patch


> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
> 
> running the following:
> 
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
> 
> causes the optimized kprobes to be missed. None are missed
> if kprobe optimization is disabled.
> 
> Signed-off-by: Jiri Olsa <jolsa@...hat.com>

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>


Ingo, could you pull this as a bugfix?

Thank you!


> ---
>  arch/x86/kernel/kprobes.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..f1a6244 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  					 struct pt_regs *regs)
>  {
>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
>  
>  	/* This is possible if op is under delayed unoptimizing */
>  	if (kprobe_disabled(&op->kp))
>  		return;
>  
> -	preempt_disable();
> +	local_irq_save(flags);
>  	if (kprobe_running()) {
>  		kprobes_inc_nmissed_count(&op->kp);
>  	} else {
> @@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  		opt_pre_handler(&op->kp, regs);
>  		__this_cpu_write(current_kprobe, NULL);
>  	}
> -	preempt_enable_no_resched();
> +	local_irq_restore(flags);
>  }
>  
>  static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@...achi.com