Message-Id: <20181205183303.828422192@goodmis.org>
Date: Wed, 05 Dec 2018 12:48:53 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org
Cc: catalin.marinas@....com, will.deacon@....com,
keescook@...omium.org, arnd@...db.de,
linux-arm-kernel@...ts.infradead.org,
Anders Roxell <anders.roxell@...aro.org>,
Ingo Molnar <mingo@...nel.org>
Subject: [PATCH 1/2] ftrace: Allow ftrace_replace_code() to be schedulable
From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>
The function ftrace_replace_code() is the ftrace engine that does the
work of modifying the nops in all the functions being traced into calls
to the function callback.
The generic version is normally called from stop machine, but an
architecture can implement a non-stop-machine version and still use the
generic ftrace_replace_code(). When an architecture does this,
ftrace_replace_code() may be called from a schedulable context, where
it can allow the code to be preemptible and schedule out.
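For context, the generic arch_ftrace_update_code() is a weak function
that runs the update under stop_machine(), while an architecture that
can safely patch its text at runtime may override it and call
ftrace_modify_all_code() from normal task context. A simplified sketch
of the two paths (condensed from kernel/trace/ftrace.c; the override
shown is only an illustration, not part of this patch):

	/* Generic path: runs inside stop_machine(), must not sleep */
	static int __ftrace_modify_code(void *data)
	{
		int *command = data;

		ftrace_modify_all_code(*command);
		return 0;
	}

	void __weak arch_ftrace_update_code(int command)
	{
		stop_machine(__ftrace_modify_code, &command, NULL);
	}

	/* Arch override: called from schedulable task context */
	void arch_ftrace_update_code(int command)
	{
		ftrace_modify_all_code(command);
	}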
In order to allow an architecture to make ftrace_replace_code()
schedulable, a new command flag is added called:

  FTRACE_SCHEDULABLE

which can be OR'd into the command that is passed to
ftrace_modify_all_code(). That function calls ftrace_replace_code() and
will have it call cond_resched() in the loop that modifies the nops
into the calls to the ftrace trampolines.
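As an illustration of how an architecture would opt in (a sketch only;
the actual arm64 opt-in is done in patch 2/2 of this series), the
architecture's arch_ftrace_update_code() would OR the flag into the
command before handing it to ftrace_modify_all_code():

	/* Example only: arch override running in schedulable context */
	void arch_ftrace_update_code(int command)
	{
		command |= FTRACE_SCHEDULABLE;
		ftrace_modify_all_code(command);
	}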
Link: http://lkml.kernel.org/r/20181204192903.8193-1-anders.roxell@linaro.org
Reported-by: Anders Roxell <anders.roxell@...aro.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 19 ++++++++++++++++---
2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index dd16e8218db3..c281b16baef9 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -389,6 +389,7 @@ enum {
FTRACE_UPDATE_TRACE_FUNC = (1 << 2),
FTRACE_START_FUNC_RET = (1 << 3),
FTRACE_STOP_FUNC_RET = (1 << 4),
+ FTRACE_SCHEDULABLE = (1 << 5),
};
/*
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 77734451cb05..74fdcacba514 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -77,6 +77,11 @@
#define ASSIGN_OPS_HASH(opsname, val)
#endif
+enum {
+ FTRACE_MODIFY_ENABLE_FL = (1 << 0),
+ FTRACE_MODIFY_SCHEDULABLE_FL = (1 << 1),
+};
+
static struct ftrace_ops ftrace_list_end __read_mostly = {
.func = ftrace_stub,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
@@ -2415,10 +2420,12 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
return -1; /* unknow ftrace bug */
}
-void __weak ftrace_replace_code(int enable)
+void __weak ftrace_replace_code(int mod_flags)
{
struct dyn_ftrace *rec;
struct ftrace_page *pg;
+ int enable = mod_flags & FTRACE_MODIFY_ENABLE_FL;
+ int schedulable = mod_flags & FTRACE_MODIFY_SCHEDULABLE_FL;
int failed;
if (unlikely(ftrace_disabled))
@@ -2435,6 +2442,8 @@ void __weak ftrace_replace_code(int enable)
/* Stop processing */
return;
}
+ if (schedulable)
+ cond_resched();
} while_for_each_ftrace_rec();
}
@@ -2548,8 +2557,12 @@ int __weak ftrace_arch_code_modify_post_process(void)
void ftrace_modify_all_code(int command)
{
int update = command & FTRACE_UPDATE_TRACE_FUNC;
+ int mod_flags = 0;
int err = 0;
+ if (command & FTRACE_SCHEDULABLE)
+ mod_flags = FTRACE_MODIFY_SCHEDULABLE_FL;
+
/*
* If the ftrace_caller calls a ftrace_ops func directly,
* we need to make sure that it only traces functions it
@@ -2567,9 +2580,9 @@ void ftrace_modify_all_code(int command)
}
if (command & FTRACE_UPDATE_CALLS)
- ftrace_replace_code(1);
+ ftrace_replace_code(mod_flags | FTRACE_MODIFY_ENABLE_FL);
else if (command & FTRACE_DISABLE_CALLS)
- ftrace_replace_code(0);
+ ftrace_replace_code(mod_flags);
if (update && ftrace_trace_function != ftrace_ops_list_func) {
function_trace_op = set_function_trace_op;
--
2.19.1