Message-ID: <20110426134625.GA21840@home.goodmis.org>
Date: Tue, 26 Apr 2011 09:46:25 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Jiri Olsa <jolsa@...hat.com>
Cc: masami.hiramatsu.pt@...achi.com, linux-kernel@...r.kernel.org,
mingo@...e.hu
Subject: Re: [PATCH] kprobes,x86: disable irq during optimized callback
On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> hi,
>
> attached patch disables irqs during the optimized callback, so we don't
> count any in-irq kprobes as missed.
>
> Also I think there's a small window where the current_kprobe variable
> could be touched in an unsafe way, but I was not able to hit
> any issue.
>
> I'm not sure whether this is a bug or if it was intentional to have
> irqs enabled during the pre_handler callback.
That's not very convincing. Did you see if we actually did miss events?
If that's the case then it is a bug. The conversion to optimizing should
not cause events to be missed.
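If you want to actually measure it, something like the throwaway module
below is what I mean (untested sketch, and the probed symbols are just
examples): put one probe on a hot process-context path that gets
jump-optimized and another on a path hit from irq context, run some load,
and compare the nmissed counts with and without your patch.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* Hot process-context path; needs to end up jump-optimized. */
static struct kprobe kp_hot = {
	.symbol_name = "do_fork",		/* example symbol */
};

/* Path that is hit from irq context. */
static struct kprobe kp_irq = {
	.symbol_name = "hrtimer_interrupt",	/* example symbol */
};

static int __init nmissed_test_init(void)
{
	int ret;

	ret = register_kprobe(&kp_hot);
	if (ret)
		return ret;

	ret = register_kprobe(&kp_irq);
	if (ret)
		unregister_kprobe(&kp_hot);

	return ret;
}

static void __exit nmissed_test_exit(void)
{
	/* Report how many hits were counted as missed for each probe */
	pr_info("%s nmissed=%lu, %s nmissed=%lu\n",
		kp_hot.symbol_name, kp_hot.nmissed,
		kp_irq.symbol_name, kp_irq.nmissed);
	unregister_kprobe(&kp_irq);
	unregister_kprobe(&kp_hot);
}

module_init(nmissed_test_init);
module_exit(nmissed_test_exit);
MODULE_LICENSE("GPL");

If the irq probe's nmissed count drops to (or near) zero with your patch
applied, that's the data that would justify the change.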
>
> wbr,
> jirka
>
> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
>
> Interrupts are also disabled during non-optimized kprobe callbacks.
>
> Signed-off-by: Jiri Olsa <jolsa@...hat.com>
> ---
> arch/x86/kernel/kprobes.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..917cb31 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  					 struct pt_regs *regs)
>  {
>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
>  
>  	/* This is possible if op is under delayed unoptimizing */
>  	if (kprobe_disabled(&op->kp))
>  		return;
>  
> +	local_irq_save(flags);
>  	preempt_disable();
No reason to disable preemption if you disabled interrupts.
>  	if (kprobe_running()) {
>  		kprobes_inc_nmissed_count(&op->kp);
> @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  		__this_cpu_write(current_kprobe, NULL);
>  	}
>  	preempt_enable_no_resched();
Remove the preempt_enable_no_resched() as well.
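IOW, with irqs off for the whole window there's nothing left for the
preempt count to protect. Something like this is what I'd expect the
callback to end up as (untested; the regs fiddling that simulates the
int3 hit is elided, it stays as it is today):

static void __kprobes optimized_callback(struct optimized_kprobe *op,
					 struct pt_regs *regs)
{
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
	unsigned long flags;

	/* This is possible if op is under delayed unoptimizing */
	if (kprobe_disabled(&op->kp))
		return;

	/* Disabling irqs also keeps us from being preempted */
	local_irq_save(flags);
	if (kprobe_running()) {
		/* Another kprobe is active, record this hit as missed */
		kprobes_inc_nmissed_count(&op->kp);
	} else {
		/* ... simulate the int3 breakpoint hit as before ... */
		__this_cpu_write(current_kprobe, &op->kp);
		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
		opt_pre_handler(&op->kp, regs);
		__this_cpu_write(current_kprobe, NULL);
	}
	local_irq_restore(flags);
}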
BTW, what's up with all these preempt_enable_no_resched()'s lying
around in the kprobe code? Looks to me like this can cause lots of
missed wakeups (preemption leaks), which would make this horrible for
real-time.
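To spell out the leak (illustrative fragment only, not from the kprobes
code):

	preempt_disable();
	/*
	 * A wakeup of a high-prio task here sets NEED_RESCHED, but it
	 * can't preempt us yet.
	 */
	preempt_enable_no_resched();
	/*
	 * Preemption is enabled again, but unlike preempt_enable() we
	 * never check NEED_RESCHED, so the woken task has to wait for
	 * the next scheduling point to get on the CPU.
	 */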
-- Steve
> +	local_irq_restore(flags);
>  }
>  
>  static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/