Message-Id: <20230504194413.3806354-29-sashal@kernel.org>
Date: Thu, 4 May 2023 15:43:49 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Yafang Shao <laoar.shao@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Jiri Olsa <olsajiri@...il.com>, Hao Luo <haoluo@...gle.com>,
Alexei Starovoitov <ast@...nel.org>,
Sasha Levin <sashal@...nel.org>, daniel@...earbox.net,
andrii@...nel.org, bpf@...r.kernel.org
Subject: [PATCH AUTOSEL 6.2 29/53] bpf: Add preempt_count_{sub,add} into btf id deny list
From: Yafang Shao <laoar.shao@...il.com>
[ Upstream commit c11bd046485d7bf1ca200db0e7d0bdc4bafdd395 ]

The recursion check in __bpf_prog_enter* and __bpf_prog_exit*
leaves preempt_count_{sub,add} unprotected. When attaching a trampoline to
them, we get a panic as follows:
[ 867.843050] BUG: TASK stack guard page was hit at 0000000009d325cf (stack is 0000000046a46a15..00000000537e7b28)
[ 867.843064] stack guard page: 0000 [#1] PREEMPT SMP NOPTI
[ 867.843067] CPU: 8 PID: 11009 Comm: trace Kdump: loaded Not tainted 6.2.0+ #4
[ 867.843100] Call Trace:
[ 867.843101] <TASK>
[ 867.843104] asm_exc_int3+0x3a/0x40
[ 867.843108] RIP: 0010:preempt_count_sub+0x1/0xa0
[ 867.843135] __bpf_prog_enter_recur+0x17/0x90
[ 867.843148] bpf_trampoline_6442468108_0+0x2e/0x1000
[ 867.843154] ? preempt_count_sub+0x1/0xa0
[ 867.843157] preempt_count_sub+0x5/0xa0
[ 867.843159] ? migrate_enable+0xac/0xf0
[ 867.843164] __bpf_prog_exit_recur+0x2d/0x40
[ 867.843168] bpf_trampoline_6442468108_0+0x55/0x1000
...
[ 867.843788] preempt_count_sub+0x5/0xa0
[ 867.843793] ? migrate_enable+0xac/0xf0
[ 867.843829] __bpf_prog_exit_recur+0x2d/0x40
[ 867.843837] BUG: IRQ stack guard page was hit at 0000000099bd8228 (stack is 00000000b23e2bc4..000000006d95af35)
[ 867.843841] BUG: IRQ stack guard page was hit at 000000005ae07924 (stack is 00000000ffd69623..0000000014eb594c)
[ 867.843843] BUG: IRQ stack guard page was hit at 00000000028320f0 (stack is 00000000034b6438..0000000078d1bcec)
[ 867.843842] bpf_trampoline_6442468108_0+0x55/0x1000
...

That is because in __bpf_prog_exit_recur, preempt_count_{sub,add} are
called after prog->active is decreased.
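
For context, a heavily simplified sketch of the exit path (paraphrased,
not the verbatim 6.2 source) shows the window: the per-CPU recursion
guard is dropped before migrate_enable(), and with CONFIG_DEBUG_PREEMPT
(or preempt toggle tracing) enabled, migrate_enable() itself ends up in
preempt_count_sub(), so a trampoline attached there re-enters the
program with no guard left.

/* Paraphrased sketch, not the verbatim kernel code. */
static void notrace __bpf_prog_exit_recur(struct bpf_prog *prog, u64 start,
					  struct bpf_tramp_run_ctx *run_ctx)
{
	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
	update_prog_stats(prog, start);
	this_cpu_dec(*(prog->active));	/* recursion guard dropped here ... */
	migrate_enable();		/* ... but this path reaches
					 * preempt_count_sub(); a trampoline
					 * attached there re-enters the prog
					 * and recurses until the stack guard
					 * page is hit. */
	rcu_read_unlock();
}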
Fix this by adding these two functions to the btf id deny list.
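
For reference, the deny list is consulted while resolving the attach
target; roughly (paraphrased, not the verbatim verifier code), a tracing
program whose target BTF id is in btf_id_deny is refused at load time:

	/* Reject attaching to functions on the deny list. */
	if (prog->type == BPF_PROG_TYPE_TRACING &&
	    btf_id_set_contains(&btf_id_deny, btf_id))
		return -EINVAL;
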
Suggested-by: Steven Rostedt <rostedt@...dmis.org>
Signed-off-by: Yafang Shao <laoar.shao@...il.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Jiri Olsa <olsajiri@...il.com>
Acked-by: Hao Luo <haoluo@...gle.com>
Link: https://lore.kernel.org/r/20230413025248.79764-1-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/bpf/verifier.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 68455fd56eea5..c6e8be93594a7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16965,6 +16965,10 @@ BTF_ID(func, migrate_enable)
#if !defined CONFIG_PREEMPT_RCU && !defined CONFIG_TINY_RCU
BTF_ID(func, rcu_read_unlock_strict)
#endif
+#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
+BTF_ID(func, preempt_count_add)
+BTF_ID(func, preempt_count_sub)
+#endif
BTF_SET_END(btf_id_deny)
static int check_attach_btf_id(struct bpf_verifier_env *env)
--
2.39.2