Message-ID: <20100406101925.GD5147@nowhere>
Date: Tue, 6 Apr 2010 12:19:28 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: David Miller <davem@...emloft.net>,
Steven Rostedt <rostedt@...dmis.org>
Cc: sparclinux@...r.kernel.org, linux-kernel@...r.kernel.org,
mingo@...e.hu, acme@...hat.com, a.p.zijlstra@...llo.nl,
paulus@...ba.org
Subject: Re: Random scheduler/unaligned accesses crashes with perf lock
events on sparc 64
On Tue, Apr 06, 2010 at 02:50:49AM -0700, David Miller wrote:
> From: Frederic Weisbecker <fweisbec@...il.com>
> Date: Mon, 5 Apr 2010 21:40:58 +0200
>
> > It happens without CONFIG_FUNCTION_TRACER as well (but it happens
> > when the function tracer runs). And I hadn't your
> > perf_arch_save_caller_regs() when I triggered this.
>
> I figured out the problem, it's NMIs. As soon as I disable all of the
> NMI watchdog code, the problem goes away.
>
> This is because some parts of the NMI interrupt handling path are not
> marked with "notrace" and the various tracer code paths use
> local_irq_disable() (either directly or indirectly) which doesn't work
> with sparc64's NMI scheme. These essentially turn NMIs back on in the
> NMI handler before the NMI condition has been cleared, and thus we can
> re-enter with another NMI interrupt.
>
> We went through this for perf events, and we just made sure that
> local_irq_{enable,disable}() never occurs in any of the code paths in
> perf events that can be reached via the NMI interrupt handler. (the
> only one we had was sched_clock() and that was easily fixed)
>
> So, the first mcount hit we get is for rcu_nmi_enter() via
> nmi_enter().
>
> I can see two ways to handle this:
>
> 1) Pepper 'notrace' markers onto rcu_nmi_enter(), rcu_nmi_exit()
> and whatever else I can see getting hit in the NMI interrupt
> handler code paths.
>
> 2) Add a hack to __raw_local_irq_save() that keeps it from writing
> anything to the interrupt level register if we have NMI's disabled.
> (this puts the cost on the entire kernel instead of just the NMI
> paths).
>
> #1 seems to be the intent on other platforms, the majority of the NMI
> code paths are protected with 'notrace' on x86, I bet nobody noticed
> that nmi_enter() when CONFIG_NO_HZ && !CONFIG_TINY_RCU ends up calling
> a function that does tracing.
>
> The next one we'll hit is atomic_notifier_call_chain() (amusingly
> notify_die() is marked 'notrace' but the one thing it calls isn't)
>
> For example, the following are the generic notrace annotations I
> would need to get sparc64 ftrace functioning again. (Frederic, I will
> send you the full patch with the sparc-specific bits under separate
> cover so that you can test things...)
>
> --------------------
> kernel: Add notrace annotations to common routines invoked via NMI.
>
> This includes the atomic notifier call chain as well as the RCU
> specific NMI enter/exit handlers.
Ok, but this cause looks weird to me.

The function tracer handler disables interrupts. I don't remember exactly
why, but we also have a no-preempt mode that only disables preemption
instead (see function_trace_call_preempt_only()). That means such
interrupt reentrancy is not a problem by itself. In fact, the function
tracer is not reentrant:
	data = tr->data[cpu];
	disabled = atomic_inc_return(&data->disabled);

	if (likely(disabled == 1))
		trace_function(tr, ip, parent_ip, flags, pc);

	atomic_dec(&data->disabled);
We do this precisely to protect against tracing recursion (in case a
traceable function sits in the inner function-tracing path).

NMIs are supposed to be just fine with the function tracer.
--