Message-ID: <20110512214336.GE7410@nowhere>
Date: Thu, 12 May 2011 23:43:39 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely
callable from scheduler
On Thu, May 12, 2011 at 11:28:10PM +0200, Ingo Molnar wrote:
>
> * Frederic Weisbecker <fweisbec@...il.com> wrote:
>
> > Avoid potential scheduler recursion and deadlock from the
> > stacktrace code by avoiding rescheduling when we re-enable
> > preemption.
> >
> > This robustifies some scheduler trace events like sched switch
> > when they are used to produce callchains in perf or ftrace.
>
> > - put_cpu();
> > +
> > + /* We want stacktrace to be computable anywhere, even in the scheduler */
> > + preempt_enable_no_resched();
>
> So what happens if callchain profiling happens to be interrupted by a hardirq
> and the interrupt wants to reschedule the current task? We'll miss the
> reschedule, right?
>
> preempt_enable_no_resched() is not a magic 'solve scheduler recursions' bullet
> - it's to be used only if something else will guarantee the preemption check!
> But nothing guarantees it here AFAICS.
>
> A better fix would be to use local_irq_save()/restore().
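
For reference, I read that as turning the section the patch touches (the
old get_cpu()/put_cpu() pair) into something like this; a sketch only,
the exact context and variable names are from memory:

	unsigned long flags;
	unsigned cpu;

	/*
	 * With irqs disabled, a hardirq can't set TIF_NEED_RESCHED
	 * behind our back in the middle of the walk; it simply fires
	 * once irqs are re-enabled and the reschedule happens through
	 * the normal interrupt return path.
	 */
	local_irq_save(flags);
	cpu = smp_processor_id();

	/* ... walk the per-cpu irq/exception stacks ... */

	local_irq_restore(flags);
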
Good point. But then lockdep itself might trigger a stacktrace from
local_irq_save(), leading to a stacktrace recursion.

I can use raw_local_irq_disable(), or maybe add a stacktrace recursion
protection. I fear the second option could make us lose useful information
when one stacktrace interrupts another. OK, these are extreme cases...
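
Something like the following is what I have in mind for the recursion
protection; a rough sketch only, the per-cpu flag and its name are made
up, and I'm using the raw_ irq ops so lockdep stays out of the picture:

	static DEFINE_PER_CPU(int, stacktrace_in_progress);

	void save_stack_trace(struct stack_trace *trace)
	{
		unsigned long flags;

		/* raw_ variant: don't go through the lockdep irq hooks */
		raw_local_irq_save(flags);

		/*
		 * If we land on top of another stacktrace on this CPU
		 * (from an NMI, say), drop this one instead of recursing.
		 * This is where we may lose information.
		 */
		if (__this_cpu_read(stacktrace_in_progress))
			goto out;
		__this_cpu_write(stacktrace_in_progress, 1);

		/* ... existing dump_trace() based walk ... */

		__this_cpu_write(stacktrace_in_progress, 0);
	out:
		raw_local_irq_restore(flags);
	}

The raw_ variants skip the trace_hardirqs_on/off() hooks, which is what
would bite us with a plain local_irq_save() under lockdep.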