Message-ID: <20130605115147.GF8923@twins.programming.kicks-ass.net>
Date: Wed, 5 Jun 2013 13:51:47 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: "Paul E. McKenney" <paulmck@...ibm.com>,
LKML <linux-kernel@...r.kernel.org>, Tejun Heo <tj@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Jiri Olsa <jolsa@...hat.com>
Subject: Re: [RFC][PATCH] ftrace: Use schedule_on_each_cpu() as a heavy
synchronize_sched()
On Tue, May 28, 2013 at 08:01:16PM -0400, Steven Rostedt wrote:
> The function tracer uses preempt_disable/enable_notrace() to
> synchronize between reading the registered ftrace_ops and
> unregistering them.
>
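(For context, the read side is roughly the following pattern, a
simplified paraphrase of the ftrace list-walking callback, not the
exact code:)

static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
                                 struct ftrace_ops *op, struct pt_regs *regs)
{
        /*
         * A preempt-off section stands in for rcu_read_lock(); the
         * _notrace variants keep the tracer from recursing into
         * itself.
         */
        preempt_disable_notrace();
        for (op = ftrace_ops_list; op != &ftrace_list_end; op = op->next) {
                if (ftrace_ops_test(op, ip))
                        op->func(ip, parent_ip, op, regs);
        }
        preempt_enable_notrace();
}
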
> Most of the ftrace_ops are global permanent structures that do not
> require this synchronization. That is, ops may be added to and removed
> from the hlist but are never freed, so it won't hurt if a
> synchronization is missed.
>
> But this is not true for dynamically created ftrace_ops or control_ops,
> which are used by the perf function tracing.
>
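(For reference, the perf side sets up such an ops roughly like this,
paraphrased from kernel/trace/trace_event_perf.c of that era; the ops
is embedded in the perf_event and goes away with it, which is what
makes the lifetime matter:)

int perf_ftrace_function_register(struct perf_event *event)
{
        struct ftrace_ops *ops = &event->ftrace_ops;

        /* Per-event ops, dynamically managed, freed with the event. */
        ops->flags |= FTRACE_OPS_FL_CONTROL;
        ops->func = perf_ftrace_function_call;

        return register_ftrace_function(ops);
}
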
> The problem here is that the function tracer can be used to trace
> kernel/user context switches as well as going to and from idle.
> Basically, it can be used to trace blind spots of the RCU subsystem.
> This means that even though preempt_disable() is done, a
> synchronize_sched() will ignore CPUs that haven't made it out of user
> space or idle. Those CPUs can still be executing traced functions just
> before entering or just after exiting the kernel proper.
>
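(To make the blind spot concrete, the window being closed looks roughly
like this; an illustrative timeline, not taken from the changelog:)

        CPU 0 (unregistering)            CPU 1 (being traced)
        ---------------------            --------------------
                                         running in user space;
                                         RCU treats CPU 1 as quiescent
        remove ops from list
        synchronize_sched()
          completes without waiting
          for CPU 1
        free ops
                                         enters the kernel and hits a
                                         traced function before
                                         user_exit(): uses freed ops
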
> To implement the RCU synchronization, schedule_on_each_cpu() is used
> instead of synchronize_sched(). This means that when a dynamically
> allocated ftrace_ops or a control ops is being unregistered, every CPU
> must be touched and must execute an ftrace_sync() stub function via
> the work queues. This rips CPUs out of idle or dynamic tick mode. It
> only happens when a user disables perf function tracing or another
> dynamically allocated function tracer, but it allows us to continue to
> debug RCU and context tracking with function tracing.
>
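(For reference, schedule_on_each_cpu() at the time was roughly the
following, paraphrased from kernel/workqueue.c; note the per-cpu
allocation up front, which can fail:)

int schedule_on_each_cpu(work_func_t func)
{
        int cpu;
        struct work_struct __percpu *works;

        works = alloc_percpu(struct work_struct);
        if (!works)
                return -ENOMEM;

        get_online_cpus();

        /* Queue the work on every online CPU... */
        for_each_online_cpu(cpu) {
                struct work_struct *work = per_cpu_ptr(works, cpu);

                INIT_WORK(work, func);
                schedule_work_on(cpu, work);
        }

        /* ...and wait until each CPU has actually run it. */
        for_each_online_cpu(cpu)
                flush_work(per_cpu_ptr(works, cpu));

        put_online_cpus();
        free_percpu(works);
        return 0;
}
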
> Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
>
> Index: linux-trace.git/kernel/trace/ftrace.c
> ===================================================================
> --- linux-trace.git.orig/kernel/trace/ftrace.c
> +++ linux-trace.git/kernel/trace/ftrace.c
> @@ -413,6 +413,17 @@ static int __register_ftrace_function(st
> return 0;
> }
>
> +static void ftrace_sync(struct work_struct *work)
> +{
> + /*
> + * This function is just a stub to implement a hard force
> + * of synchronize_sched(). This requires synchronizing
> + * tasks even in userspace and idle.
> + *
> + * Yes, function tracing is rude.
> + */
> +}
> +
> static int __unregister_ftrace_function(struct ftrace_ops *ops)
> {
> int ret;
> @@ -440,8 +451,12 @@ static int __unregister_ftrace_function(
> * so there'll be no new users. We must ensure
> * all current users are done before we free
> * the control data.
> + * Note synchronize_sched() is not enough, as we
> + * use preempt_disable() to do RCU, but the function
> + * tracer can be called where RCU is not active
> + * (before user_exit()).
> */
> - synchronize_sched();
> + schedule_on_each_cpu(ftrace_sync);
> control_ops_free(ops);
> }
> } else
> @@ -456,9 +471,13 @@ static int __unregister_ftrace_function(
> /*
> * Dynamic ops may be freed, we must make sure that all
> * callers are done before leaving this function.
> + *
> + * Again, normal synchronize_sched() is not good enough.
> + * We need to do a hard force of sched synchronization.
> */
> if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
> - synchronize_sched();
> + schedule_on_each_cpu(ftrace_sync);
> +
>
> return 0;
> }
>
So what happens if schedule_on_each_cpu() returns -ENOMEM? :-)
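
(A hypothetical way to cope, not something the patch does: since the
unregister path may sleep, it could retry until the per-cpu allocation
succeeds:)

        /*
         * Hypothetical fallback, not in the patch: the ops must not be
         * freed until every CPU has scheduled, so keep retrying rather
         * than giving up on -ENOMEM.
         */
        while (schedule_on_each_cpu(ftrace_sync))
                msleep(1);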