Message-ID: <158923076164.390.8535049254984522740.tip-bot2@tip-bot2>
Date: Mon, 11 May 2020 20:59:21 -0000
From: "tip-bot2 for Paul E. McKenney" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
"Paul E. McKenney" <paulmck@...nel.org>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: core/rcu] ftrace: Use synchronize_rcu_tasks_rude() instead of
ftrace_sync()

The following commit has been merged into the core/rcu branch of tip:

Commit-ID:     e5a971d76d701dbff9e5dbaa84dc9e8c3081a867
Gitweb:        https://git.kernel.org/tip/e5a971d76d701dbff9e5dbaa84dc9e8c3081a867
Author:        Paul E. McKenney <paulmck@...nel.org>
AuthorDate:    Fri, 03 Apr 2020 12:10:28 -07:00
Committer:     Paul E. McKenney <paulmck@...nel.org>
CommitterDate: Mon, 27 Apr 2020 11:03:53 -07:00

ftrace: Use synchronize_rcu_tasks_rude() instead of ftrace_sync()

This commit replaces the schedule_on_each_cpu(ftrace_sync) instances
with synchronize_rcu_tasks_rude().

Suggested-by: Steven Rostedt <rostedt@...dmis.org>
Cc: Ingo Molnar <mingo@...hat.com>
[ paulmck: Make Kconfig adjustments noted by kbuild test robot. ]
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
 kernel/trace/Kconfig  |  1 +
 kernel/trace/ftrace.c | 17 +++--------------
 2 files changed, 4 insertions(+), 14 deletions(-)

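[ Editor's note: for readers unfamiliar with the idiom being replaced,
here is a minimal sketch of the before and after.  The sketch is
illustrative only, but schedule_on_each_cpu() and
synchronize_rcu_tasks_rude() are the real kernel APIs:

	/* Old idiom: queue an empty work item on every CPU.  Once
	 * schedule_on_each_cpu() returns, every CPU has gone through
	 * the scheduler, so no CPU can still be executing code that
	 * was reachable before the call. */
	static void ftrace_sync(struct work_struct *work)
	{
		/* Intentionally empty: the context switch is the point. */
	}

	schedule_on_each_cpu(ftrace_sync);	/* before this commit */

	/* New idiom: an RCU Tasks Rude grace period provides the same
	 * guarantee directly, covering even CPUs that are idle or
	 * running in userspace, per the removed stub's own comment. */
	synchronize_rcu_tasks_rude();		/* after this commit */
]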
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 402eef8..ae69010 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -158,6 +158,7 @@ config FUNCTION_TRACER
 	select CONTEXT_SWITCH_TRACER
 	select GLOB
 	select TASKS_RCU if PREEMPTION
+	select TASKS_RUDE_RCU
 	help
 	  Enable the kernel to trace every kernel function. This is done
 	  by using a compiler feature to insert a small, 5-byte No-Operation
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 041694a..771eace 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -160,17 +160,6 @@ static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
 	op->saved_func(ip, parent_ip, op, regs);
 }
 
-static void ftrace_sync(struct work_struct *work)
-{
-	/*
-	 * This function is just a stub to implement a hard force
-	 * of synchronize_rcu(). This requires synchronizing
-	 * tasks even in userspace and idle.
-	 *
-	 * Yes, function tracing is rude.
-	 */
-}
-
 static void ftrace_sync_ipi(void *data)
 {
 	/* Probably not needed, but do it anyway */
@@ -256,7 +245,7 @@ static void update_ftrace_function(void)
 	 * Make sure all CPUs see this. Yes this is slow, but static
 	 * tracing is slow and nasty to have enabled.
 	 */
-	schedule_on_each_cpu(ftrace_sync);
+	synchronize_rcu_tasks_rude();
 	/* Now all cpus are using the list ops. */
 	function_trace_op = set_function_trace_op;
 	/* Make sure the function_trace_op is visible on all CPUs */
@@ -2932,7 +2921,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
 		 * infrastructure to do the synchronization, thus we must do it
 		 * ourselves.
 		 */
-		schedule_on_each_cpu(ftrace_sync);
+		synchronize_rcu_tasks_rude();
 
 		/*
 		 * When the kernel is preemptive, tasks can be preempted
@@ -5887,7 +5876,7 @@ ftrace_graph_release(struct inode *inode, struct file *file)
 		 * infrastructure to do the synchronization, thus we must do it
 		 * ourselves.
 		 */
-		schedule_on_each_cpu(ftrace_sync);
+		synchronize_rcu_tasks_rude();
 
 		free_ftrace_hash(old_hash);
 	}
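
[ Editor's note: the update_ftrace_function() hunk above shows why the
full-CPU synchronization matters there.  Condensed, and slightly
simplified, from that hunk:

	/* Switch static tracing to the list function, which does not
	 * depend on function_trace_op. */
	ftrace_trace_function = ftrace_ops_list_func;

	/* Wait until every CPU has scheduled, so no CPU can still be
	 * executing the old callback with the old function_trace_op. */
	synchronize_rcu_tasks_rude();

	/* Only now is it safe to publish the new op. */
	function_trace_op = set_function_trace_op;
]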