Message-ID: <20130529015339.GX6172@linux.vnet.ibm.com>
Date: Tue, 28 May 2013 18:53:39 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/2] ftrace: Use the rcu _notrace variants for
rcu_dereference_raw() and friends
On Tue, May 28, 2013 at 02:38:43PM -0400, Steven Rostedt wrote:
> As rcu_dereference_raw() can add quite a few checks under the RCU
> debug config options, and tracing uses rcu_dereference_raw(), these
> checks are executed within the function tracer. The function tracer
> then also ends up tracing the debug checks themselves. This added
> overhead can livelock the system.
>
> Have the function tracer use the new RCU _notrace equivalents, which
> skip the RCU debug checks.
>
> Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
Looks good to me!
Acked-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
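
For reference, the _notrace variants come from patch 1/2 of this
series.  A rough sketch of why the plain raw variant is a problem,
reconstructed from memory of the 3.10-era rcupdate.h (details may
differ):

	/* rcu_dereference_raw() goes through rcu_dereference_check(): */
	#define rcu_dereference_raw(p)	rcu_dereference_check(p, 1)
	#define rcu_dereference_check(p, c) \
		__rcu_dereference_check((p), rcu_read_lock_held() || (c), __rcu)

	/*
	 * Because || evaluates left to right, rcu_read_lock_held() runs
	 * even when c == 1, and it (plus debug_lockdep_rcu_enabled()
	 * under PROVE_RCU) is itself a traceable function -- so the
	 * function tracer ends up tracing the checks in its own list walk.
	 */

	/* The _notrace variant hardwires the condition, so no traceable
	 * debug helper is ever called on this path: */
	#define rcu_dereference_raw_notrace(p) \
		__rcu_dereference_check((p), 1, __rcu)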
> Index: linux-trace.git/kernel/trace/ftrace.c
> ===================================================================
> --- linux-trace.git.orig/kernel/trace/ftrace.c
> +++ linux-trace.git/kernel/trace/ftrace.c
> @@ -120,22 +120,22 @@ static void ftrace_ops_no_ops(unsigned l
>
> /*
> * Traverse the ftrace_global_list, invoking all entries. The reason that we
> - * can use rcu_dereference_raw() is that elements removed from this list
> + * can use rcu_dereference_raw_notrace() is that elements removed from this list
> * are simply leaked, so there is no need to interact with a grace-period
> - * mechanism. The rcu_dereference_raw() calls are needed to handle
> + * mechanism. The rcu_dereference_raw_notrace() calls are needed to handle
> * concurrent insertions into the ftrace_global_list.
> *
> * Silly Alpha and silly pointer-speculation compiler optimizations!
> */
> #define do_for_each_ftrace_op(op, list) \
> - op = rcu_dereference_raw(list); \
> + op = rcu_dereference_raw_notrace(list); \
> do
>
> /*
> * Optimized for just a single item in the list (as that is the normal case).
> */
> #define while_for_each_ftrace_op(op) \
> - while (likely(op = rcu_dereference_raw((op)->next)) && \
> + while (likely(op = rcu_dereference_raw_notrace((op)->next)) && \
> unlikely((op) != &ftrace_list_end))
>
> static inline void ftrace_ops_init(struct ftrace_ops *ops)
> @@ -779,7 +779,7 @@ ftrace_find_profiled_func(struct ftrace_
> if (hlist_empty(hhd))
> return NULL;
>
> - hlist_for_each_entry_rcu(rec, hhd, node) {
> + hlist_for_each_entry_rcu_notrace(rec, hhd, node) {
> if (rec->ip == ip)
> return rec;
> }
> @@ -1165,7 +1165,7 @@ ftrace_lookup_ip(struct ftrace_hash *has
>
> hhd = &hash->buckets[key];
>
> - hlist_for_each_entry_rcu(entry, hhd, hlist) {
> + hlist_for_each_entry_rcu_notrace(entry, hhd, hlist) {
> if (entry->ip == ip)
> return entry;
> }
> @@ -1422,8 +1422,8 @@ ftrace_ops_test(struct ftrace_ops *ops,
> struct ftrace_hash *notrace_hash;
> int ret;
>
> - filter_hash = rcu_dereference_raw(ops->filter_hash);
> - notrace_hash = rcu_dereference_raw(ops->notrace_hash);
> + filter_hash = rcu_dereference_raw_notrace(ops->filter_hash);
> + notrace_hash = rcu_dereference_raw_notrace(ops->notrace_hash);
>
> if ((ftrace_hash_empty(filter_hash) ||
> ftrace_lookup_ip(filter_hash, ip)) &&
> @@ -2920,7 +2920,7 @@ static void function_trace_probe_call(un
> * on the hash. rcu_read_lock is too dangerous here.
> */
> preempt_disable_notrace();
> - hlist_for_each_entry_rcu(entry, hhd, node) {
> + hlist_for_each_entry_rcu_notrace(entry, hhd, node) {
> if (entry->ip == ip)
> entry->ops->func(ip, parent_ip, &entry->data);
> }
>
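For context, the list walk that the first two hunks feed looks roughly
like this in ftrace.c (caller sketched from memory, with the probe
signature simplified):

	struct ftrace_ops *op;

	/* Every op->func() invoked here is itself traceable, so the
	 * walk must not call back into traceable RCU debug helpers. */
	do_for_each_ftrace_op(op, ftrace_global_list) {
		if (ftrace_ops_test(op, ip))
			op->func(ip, parent_ip);
	} while_for_each_ftrace_op(op);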
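And the hlist iterator substituted in the remaining hunks is the
obvious composition -- again a sketch of patch 1/2 from memory:

	#define hlist_for_each_entry_rcu_notrace(pos, head, member)	    \
		for (pos = hlist_entry_safe(				    \
			rcu_dereference_raw_notrace(hlist_first_rcu(head)), \
				typeof(*(pos)), member);		    \
		     pos;						    \
		     pos = hlist_entry_safe(				    \
			rcu_dereference_raw_notrace(hlist_next_rcu(	    \
				&(pos)->member)), typeof(*(pos)), member))

That is just hlist_for_each_entry_rcu() with the dereference swapped,
matching the one-for-one substitutions above.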