Message-Id: <20201005142115.662558364@linuxfoundation.org>
Date: Mon, 5 Oct 2020 17:26:15 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Paul McKenney <paulmck@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Steven Rostedt (VMware)" <rostedt@...dmis.org>
Subject: [PATCH 5.8 19/85] ftrace: Move RCU is watching check after recursion check
From: Steven Rostedt (VMware) <rostedt@...dmis.org>
commit b40341fad6cc2daa195f8090fd3348f18fff640a upstream.
The first thing that the ftrace function callback helper functions should do
is to check for recursion. Peter Zijlstra found that when
"rcu_is_watching()" had its notrace removed, it caused perf function tracing
to crash. This is because rcu_is_watching() is called before the function
recursion check, and if it is traced, it will cause an infinite recursion
loop.
rcu_is_watching() should still stay notrace, but even so this should never
have crashed in the first place: the recursion prevention must be the first
thing done in callback functions.
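
For context, here is a standalone sketch of the ordering problem (plain
userspace C, not the kernel code; the in_callback flag and the
is_watching_traced() helper are illustrative stand-ins for
trace_test_and_set_recursion() and a traced rcu_is_watching()):

	#include <stdbool.h>
	#include <stdio.h>

	static _Thread_local bool in_callback;	/* stand-in for the recursion guard */
	static bool watching = true;		/* stand-in for rcu_is_watching()'s state */

	static void traced_callback(void);

	/* A helper that is itself traced: calling it re-enters the callback. */
	static bool is_watching_traced(void)
	{
		traced_callback();		/* the tracing hook fires here */
		return watching;
	}

	static void traced_callback(void)
	{
		/* Recursion guard first: a re-entrant call bails out immediately. */
		if (in_callback)
			return;
		in_callback = true;

		/* Only now is it safe to call helpers that may themselves be traced. */
		if (is_watching_traced())
			puts("do the real tracing work");

		in_callback = false;
	}

	int main(void)
	{
		traced_callback();	/* terminates only because the guard is taken first */
		return 0;
	}

With the guard taken first, the re-entrant call returns immediately; with the
order reversed, the traced helper would re-enter the callback before the guard
is set and recurse until the stack overflows.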
Link: https://lore.kernel.org/r/20200929112541.GM2628@hirez.programming.kicks-ass.net
Cc: stable@...r.kernel.org
Cc: Paul McKenney <paulmck@...nel.org>
Fixes: c68c0fa293417 ("ftrace: Have ftrace_ops_get_func() handle RCU and PER_CPU flags too")
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reported-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
kernel/trace/ftrace.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6877,16 +6877,14 @@ static void ftrace_ops_assist_func(unsig
 {
 	int bit;
 
-	if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
-		return;
-
 	bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
 	if (bit < 0)
 		return;
 
 	preempt_disable_notrace();
 
-	op->func(ip, parent_ip, op, regs);
+	if (!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching())
+		op->func(ip, parent_ip, op, regs);
 
 	preempt_enable_notrace();
 	trace_clear_recursion(bit);