Message-ID: <20201112003334.032370380@goodmis.org>
Date: Wed, 11 Nov 2020 19:32:50 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jiri Kosina <jikos@...nel.org>,
Joe Lawrence <joe.lawrence@...hat.com>,
live-patching@...r.kernel.org, Petr Mladek <pmladek@...e.com>,
Miroslav Benes <mbenes@...e.cz>
Subject: [for-next][PATCH 06/17] livepatch/ftrace: Add recursion protection to the ftrace callback
From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>
If an ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
make a helper trampoline to do the recursion protection before calling the
callback, instead of calling the callback directly.

The default for ftrace_ops is going to change. It will expect that handlers
provide their own recursion protection, unless their ftrace_ops states
otherwise.
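
For reference, a callback that provides its own recursion protection would
be structured roughly as follows. This is only a minimal sketch: the
callback name and its exact parameter list are illustrative and depend on
the kernel version; only the trylock/unlock pairing mirrors what this patch
adds to the livepatch handler.

	static void notrace my_callback(unsigned long ip, unsigned long parent_ip,
					struct ftrace_ops *op, struct pt_regs *regs)
	{
		int bit;

		/* Returns a negative value if this context is already inside the callback. */
		bit = ftrace_test_recursion_trylock();
		if (bit < 0)
			return;

		/* ... do the actual work of the callback ... */

		ftrace_test_recursion_unlock(bit);
	}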
Link: https://lkml.kernel.org/r/20201028115613.291169246@goodmis.org
Link: https://lkml.kernel.org/r/20201106023547.122802424@goodmis.org
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Jiri Kosina <jikos@...nel.org>
Cc: Joe Lawrence <joe.lawrence@...hat.com>
Cc: live-patching@...r.kernel.org
Reviewed-by: Petr Mladek <pmladek@...e.com>
Acked-by: Miroslav Benes <mbenes@...e.cz>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
kernel/livepatch/patch.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index b552cf2d85f8..6c0164d24bbd 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -45,9 +45,13 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 	struct klp_ops *ops;
 	struct klp_func *func;
 	int patch_state;
+	int bit;
 
 	ops = container_of(fops, struct klp_ops, fops);
 
+	bit = ftrace_test_recursion_trylock();
+	if (bit < 0)
+		return;
 	/*
 	 * A variant of synchronize_rcu() is used to allow patching functions
 	 * where RCU is not watching, see klp_synchronize_transition().
@@ -117,6 +121,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
 
 unlock:
 	preempt_enable_notrace();
+	ftrace_test_recursion_unlock(bit);
 }
 
 /*
--
2.28.0