Message-ID: <20150128100446.64c91b8b@gandalf.local.home>
Date: Wed, 28 Jan 2015 10:04:46 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC PATCH 2/4] sched: Use traced preempt count operations to
toggle PREEMPT_ACTIVE
On Wed, 28 Jan 2015 14:59:43 +0100
Frederic Weisbecker <fweisbec@...il.com> wrote:
> On Tue, Jan 27, 2015 at 08:42:39PM -0500, Steven Rostedt wrote:
> > On Wed, 28 Jan 2015 01:24:10 +0100
> > Frederic Weisbecker <fweisbec@...il.com> wrote:
> >
> > > Commit d1f74e20b5b064a130cd0743a256c2d3cfe84010 turned the PREEMPT_ACTIVE
> > > modifiers into raw untraced preempt count operations. But this prevents
> > > us from debugging and tracing preempt-disabled sections if we pull that
> > > responsibility into schedule() callers (see following patches).
> > >
> > > Is there anything we can do about that?
> > >
> >
> > I'm trying to understand how you solved the recursion issue with
> > preempt_schedule()?
>
> I don't solve it; rather, I outline the issue to make sure it's still
> a problem today. I can keep the non-traced API, but we'll lose debuggability
> and latency measurement in preempt_schedule(). But I think that was already
> the case before my patchset.
>
> >
> > Here's what happens:
> >
> > preempt_schedule()
> > preempt_count_add() -> gets traced by function tracer
> > function_trace_call()
> > preempt_disable_notrace()
> > [...]
> > preempt_enable_notrace() -> sees NEED_RESCHED set
> > preempt_schedule()
> > preempt_count_add() -> gets traced
> > function_trace_call()
> > preempt_disable_notrace()
> > preempt_enable_notrace() -> sees NEED_RESCHED set
> >
> > [etc etc until BOOM!]
> >
> > Perhaps if you can find a way to clear NEED_RESCHED before disabling
> > preemption, then it would work. But I don't see that in the patch
> > series.
>
> Something like this in function tracing?
>
> prev_resched = need_resched();
> do_function_tracing()
> preempt_disable()
> .....
> preempt_enable_no_resched()
>
> if (!prev_resched && need_resched())
> preempt_schedule()
>
> Sounds like a good idea. More overhead but maybe more stability.
>
HAHAHAHAH! Yeah, if I want another pounding by Thomas.
That's exactly what I had originally, until Thomas pointed out a race:
need_resched can be set just before the function tracer is called, so
we miss the preemption check and never schedule on time. That took
Thomas some time to figure out, and he was really pissed when he
discovered it was because of the function tracer.
So, no, that won't work.
Maybe we just have a per_cpu variable that we set in
preempt_schedule() that the function tracer can read to say "do not
resched here".
Another solution is to hard-code the preempt trace just after the
__preempt_disable(), which would simulate the tracing if it were
enabled. I had patches to do that a long time ago, but Peter didn't
like them. Although I think that was because I used a trick with
stop_critical_timings; a custom version made for this exact purpose
would be cleaner.
-- Steve