Message-ID: <20170407105826.562b2e24@gandalf.local.home>
Date: Fri, 7 Apr 2017 10:58:26 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 0/5 v2] tracing: Add usecase of synchronize_rcu_tasks()
and stack_tracer_disable()
On Fri, 7 Apr 2017 07:43:35 -0700
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> On Fri, Apr 07, 2017 at 10:01:06AM -0400, Steven Rostedt wrote:
> > Paul,
> >
> > Here's my latest. You OK with it?
>
> Given your update to 3/5, I suspect that we could live with it. I am
> expecting some complaints about increases in idle-entry latency, but might
> be best to wait for complaints rather than complexifying too proactively.
We only added a this_cpu_inc() and this_cpu_dec(), which are very fast
operations. I highly doubt it will be measurable. Although I'm talking
about x86; IIRC, the this_cpu_inc/dec operations were poorly implemented
for other archs in the past. I'm not sure if that has been fixed, though.
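
For reference, the hot-path addition amounts to roughly this (just an
illustrative sketch, not the exact code from the series; names are made
up here). On x86 the this_cpu_inc()/this_cpu_dec() end up as a single
%gs-prefixed inc/dec of a per-CPU counter:

	#include <linux/percpu.h>

	/* Illustrative per-CPU nesting counter bumped around the
	 * idle-entry window (name hypothetical).
	 */
	static DEFINE_PER_CPU(int, trace_idle_nesting);

	static inline void trace_idle_enter(void)
	{
		this_cpu_inc(trace_idle_nesting);
	}

	static inline void trace_idle_exit(void)
	{
		this_cpu_dec(trace_idle_nesting);
	}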
>
> That said, there isn't supposed to be any tracing during the now very
> small interval where RCU's idle-entry is incomplete. Mightn't it be
> better to (under CONFIG_PROVE_RCU or some such) give splats if tracing
> showed up in that interval?
>
Again, tracing is not the issue. I do function tracing in that location
without any problems. The issue here was the stack tracer.
Maybe we can create a new variable that is more cache local to the RCU
code.
What about calling it "rcu_disabled"? Then tracing that depends on RCU
can simply check that.
s/stack_trace_disable/disable_rcu/
s/stack_trace_enable/enable_rcu/
export a per cpu variable rcu_disabled
Then I can have the stack tracer check that variable as well. And we
could even put a WARN_ON(this_cpu_read(rcu_disabled)) in the
TRACE_EVENT() macros.
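
Something along these lines is what I'm picturing (only a sketch, not a
real patch; the exact hooks, config guards, and helper names like
tracing_rcu_is_disabled() are just for illustration):

	#include <linux/percpu.h>
	#include <linux/bug.h>

	/* Per-CPU flag exported by RCU, nonzero while RCU's
	 * idle-entry/exit bookkeeping is incomplete.
	 */
	DEFINE_PER_CPU(int, rcu_disabled);
	EXPORT_PER_CPU_SYMBOL_GPL(rcu_disabled);

	static inline void disable_rcu(void)
	{
		this_cpu_inc(rcu_disabled);
	}

	static inline void enable_rcu(void)
	{
		this_cpu_dec(rcu_disabled);
	}

	/* Check that the stack tracer (and possibly the TRACE_EVENT()
	 * glue) could make before doing anything that relies on RCU.
	 */
	static inline bool tracing_rcu_is_disabled(void)
	{
		if (this_cpu_read(rcu_disabled)) {
			WARN_ON_ONCE(1);  /* could be gated on CONFIG_PROVE_RCU */
			return true;
		}
		return false;
	}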
Thoughts?
-- Steve