Message-Id: <20211115165431.231469268@linuxfoundation.org>
Date: Mon, 15 Nov 2021 17:56:48 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Guo Ren <guoren@...nel.org>, Ingo Molnar <mingo@...hat.com>,
"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>,
Helge Deller <deller@....de>,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jiri Kosina <jikos@...nel.org>,
Miroslav Benes <mbenes@...e.cz>,
Petr Mladek <pmladek@...e.com>,
Joe Lawrence <joe.lawrence@...hat.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Nicholas Piggin <npiggin@...il.com>,
Jisheng Zhang <jszhang@...nel.org>,
Abaci <abaci@...ux.alibaba.com>,
Michael Wang <yun.wang@...ux.alibaba.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.14 325/849] ftrace: do CPU checking after preemption disabled
From: 王贇 <yun.wang@...ux.alibaba.com>
[ Upstream commit d33cc657372366a8959f099c619a208b4c5dc664 ]
With CONFIG_DEBUG_PREEMPT we observed reports like:

  BUG: using smp_processor_id() in preemptible
  caller is perf_ftrace_function_call+0x6f/0x2e0
  CPU: 1 PID: 680 Comm: a.out Not tainted
  Call Trace:
   <TASK>
   dump_stack_lvl+0x8d/0xcf
   check_preemption_disabled+0x104/0x110
   ? optimize_nops.isra.7+0x230/0x230
   ? text_poke_bp_batch+0x9f/0x310
   perf_ftrace_function_call+0x6f/0x2e0
   ...
   __text_poke+0x5/0x620
   text_poke_bp_batch+0x9f/0x310
This tells us that the CPU may change once the task is preempted, so
checking the CPU before preemption is disabled is invalid.

Since ftrace_test_recursion_trylock() now disables preemption itself,
this patch simply moves the CPU check after the trylock() to address
the issue.
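For reference, a minimal sketch of the control flow this change produces
in perf_ftrace_function_call() (abbreviated; the exact 5.14 prototype and
the unlock at the end are assumptions taken from the surrounding tree,
the per-CPU perf event handling is elided, and only the ordering matters):

  static void perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
                                        struct ftrace_ops *ops,
                                        struct ftrace_regs *fregs)
  {
          int bit;

          if (!rcu_is_watching())
                  return;

          /* trylock() disables preemption on success ... */
          bit = ftrace_test_recursion_trylock(ip, parent_ip);
          if (bit < 0)
                  return;

          /* ... so the CPU can no longer change underneath this check. */
          if ((unsigned long)ops->private != smp_processor_id())
                  goto out;

          /* per-CPU perf event handling elided ... */

  out:
          ftrace_test_recursion_unlock(bit);
  }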
Link: https://lkml.kernel.org/r/54880691-5fe2-33e7-d12f-1fa6136f5183@linux.alibaba.com
CC: Steven Rostedt <rostedt@...dmis.org>
Cc: Guo Ren <guoren@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "James E.J. Bottomley" <James.Bottomley@...senPartnership.com>
Cc: Helge Deller <deller@....de>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Paul Walmsley <paul.walmsley@...ive.com>
Cc: Palmer Dabbelt <palmer@...belt.com>
Cc: Albert Ou <aou@...s.berkeley.edu>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Borislav Petkov <bp@...en8.de>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Jiri Kosina <jikos@...nel.org>
Cc: Miroslav Benes <mbenes@...e.cz>
Cc: Petr Mladek <pmladek@...e.com>
Cc: Joe Lawrence <joe.lawrence@...hat.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: Jisheng Zhang <jszhang@...nel.org>
Reported-by: Abaci <abaci@...ux.alibaba.com>
Signed-off-by: Michael Wang <yun.wang@...ux.alibaba.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/trace/trace_event_perf.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 03be4435d103f..50cd5a1a7ab4a 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -441,13 +441,13 @@ perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
if (!rcu_is_watching())
return;
- if ((unsigned long)ops->private != smp_processor_id())
- return;
-
bit = ftrace_test_recursion_trylock(ip, parent_ip);
if (bit < 0)
return;
+ if ((unsigned long)ops->private != smp_processor_id())
+ goto out;
+
event = container_of(ops, struct perf_event, ftrace_ops);
/*
--
2.33.0