Message-ID: <7ec34e08-a357-58d6-2ce4-c7472d8b0381@linux.alibaba.com>
Date: Tue, 12 Oct 2021 13:40:31 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Guo Ren <guoren@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>,
Helge Deller <deller@....de>,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jiri Kosina <jikos@...nel.org>,
Miroslav Benes <mbenes@...e.cz>,
Petr Mladek <pmladek@...e.com>,
Joe Lawrence <joe.lawrence@...hat.com>,
Colin Ian King <colin.king@...onical.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Nicholas Piggin <npiggin@...il.com>,
Jisheng Zhang <jszhang@...nel.org>, linux-csky@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-parisc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
live-patching@...r.kernel.org
Subject: [PATCH 2/2] ftrace: prevent preemption in perf_ftrace_function_call()

With CONFIG_DEBUG_PREEMPT we observed reports like:

  BUG: using smp_processor_id() in preemptible
  caller is perf_ftrace_function_call+0x6f/0x2e0
  CPU: 1 PID: 680 Comm: a.out Not tainted
  Call Trace:
   <TASK>
   dump_stack_lvl+0x8d/0xcf
   check_preemption_disabled+0x104/0x110
   ? optimize_nops.isra.7+0x230/0x230
   ? text_poke_bp_batch+0x9f/0x310
   perf_ftrace_function_call+0x6f/0x2e0
   ...
   __text_poke+0x5/0x620
   text_poke_bp_batch+0x9f/0x310

This tells us that the CPU can change after the task is preempted,
making any CPU check done before preemption invalid afterwards.

This patch disables preemption in perf_ftrace_function_call() to
prevent the CPU from changing.
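
For context, here is a minimal sketch of the racy pattern and the
fixed one (not part of the patch; racy_callback()/fixed_callback()
and their bodies are made up for illustration):

	#include <linux/ftrace.h>
	#include <linux/preempt.h>
	#include <linux/smp.h>

	static void racy_callback(struct ftrace_ops *ops)
	{
		/*
		 * Racy: with preemption enabled the task can migrate
		 * right after smp_processor_id() returns, so
		 * ops->private is compared against a possibly stale
		 * CPU id (and CONFIG_DEBUG_PREEMPT complains).
		 */
		if ((unsigned long)ops->private != smp_processor_id())
			return;
		/* ... per-cpu work may run on the wrong CPU here ... */
	}

	static void fixed_callback(struct ftrace_ops *ops)
	{
		/*
		 * The _notrace variants are used so the preempt
		 * accounting itself cannot recurse into ftrace.
		 */
		preempt_disable_notrace();
		if ((unsigned long)ops->private == smp_processor_id()) {
			/* ... per-cpu work, the CPU cannot change here ... */
		}
		preempt_enable_notrace();
	}
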
CC: Steven Rostedt <rostedt@...dmis.org>
Reported-by: Abaci <abaci@...ux.alibaba.com>
Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
---
kernel/trace/trace_event_perf.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 6aed10e..33c2f76 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -441,12 +441,19 @@ void perf_trace_buf_update(void *record, u16 type)
 	if (!rcu_is_watching())
 		return;
 
+	/*
+	 * Prevent CPU changing from now on. RCU must
+	 * be watching if the task was migrated and
+	 * scheduled.
+	 */
+	preempt_disable_notrace();
+
 	if ((unsigned long)ops->private != smp_processor_id())
-		return;
+		goto out;
 
 	bit = ftrace_test_recursion_trylock(ip, parent_ip);
 	if (bit < 0)
-		return;
+		goto out;
 
 	event = container_of(ops, struct perf_event, ftrace_ops);
 
@@ -468,16 +475,18 @@ void perf_trace_buf_update(void *record, u16 type)
 
 	entry = perf_trace_buf_alloc(ENTRY_SIZE, NULL, &rctx);
 	if (!entry)
-		goto out;
+		goto unlock;
 
 	entry->ip = ip;
 	entry->parent_ip = parent_ip;
 	perf_trace_buf_submit(entry, ENTRY_SIZE, rctx, TRACE_FN,
 			      1, &regs, &head, NULL);
 
-out:
+unlock:
 	ftrace_test_recursion_unlock(bit);
 #undef ENTRY_SIZE
+out:
+	preempt_enable_notrace();
 }
 
 static int perf_ftrace_function_register(struct perf_event *event)
--
1.8.3.1