Message-ID: <20160318111332.69e79e38@gandalf.local.home>
Date: Fri, 18 Mar 2016 11:13:32 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Joel Fernandes <agnel.joel@...il.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH] trace: irqsoff: Fix function tracing in preempt and
preemptirqsoff tracers
On Sat, 12 Mar 2016 00:13:06 -0800
Joel Fernandes <agnel.joel@...il.com> wrote:
> In the preempt* tracers, functions are not traced in critical sections where
> preemption is disabled but interrupts are enabled, because the current
> behavior is to check only whether interrupts are disabled and to skip
> tracing if they are not. So here we do the correct check depending on the
> tracer type and allow these functions to be traced.
>
> One example is when interrupts are enabled before softirq processing; with
> the patch, these functions are traced as well:
>
> <...>-2265 1d.h1 3419us : preempt_count_sub <-irq_exit
> <...>-2265 1d..1 3419us : __do_softirq <-irq_exit
> <...>-2265 1d..1 3419us : msecs_to_jiffies <-__do_softirq
> <...>-2265 1d..1 3420us : irqtime_account_irq <-__do_softirq
> <...>-2265 1d..1 3420us : __local_bh_disable_ip <-__do_softirq
> <...>-2265 1..s1 3421us : run_timer_softirq <-__do_softirq
> <...>-2265 1..s1 3421us : hrtimer_run_pending <-run_timer_softirq
> <...>-2265 1..s1 3421us : _raw_spin_lock_irq <-run_timer_softirq
> <...>-2265 1d.s1 3422us : preempt_count_add <-_raw_spin_lock_irq
> <...>-2265 1d.s2 3422us : _raw_spin_unlock_irq <-run_timer_softirq
> <...>-2265 1..s2 3422us : preempt_count_sub <-_raw_spin_unlock_irq
> <...>-2265 1..s1 3423us : rcu_bh_qs <-__do_softirq
> <...>-2265 1d.s1 3423us : irqtime_account_irq <-__do_softirq
> <...>-2265 1d.s1 3423us : __local_bh_enable <-__do_softirq
>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Ingo Molnar <mingo@...hat.com>
> Signed-off-by: Joel Fernandes <agnel.joel@...il.com>
> ---
> kernel/trace/trace_irqsoff.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index e4e5658..ca8f84f 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -110,8 +110,20 @@ static int func_prolog_dec(struct trace_array *tr,
>
> local_save_flags(*flags);
> /* slight chance to get a false positive on tracing_cpu */
> - if (!irqs_disabled_flags(*flags))
> - return 0;
> + switch (trace_type) {
> + case (TRACER_IRQS_OFF | TRACER_PREEMPT_OFF):
> + if (!preempt_trace() && !irqs_disabled_flags(*flags))
> + return 0;
> + break;
> + case TRACER_IRQS_OFF:
> + if (!irqs_disabled_flags(*flags))
> + return 0;
> + break;
> + case TRACER_PREEMPT_OFF:
> + if (!preempt_trace())
> + return 0;
> + break;
> + }
Actually this is too complex a fix. If tracing_cpu is set, we want to
trace this cpu. And I'm starting to think we don't even need this
check, because I'm having a hard time seeing how tracing_cpu could be
set without preemption or irqs being disabled, and thus there would be
no race.
Anyway, we can just add a check for whether either irqs are disabled or
preemption is disabled:

	if (!irqs_disabled_flags(*flags) && !preempt_count())
		return 0;
-- Steve
>
> *data = per_cpu_ptr(tr->trace_buffer.data, cpu);
> disabled = atomic_inc_return(&(*data)->disabled);