Message-Id: <20230510122045.2259-1-zegao@tencent.com>
Date: Wed, 10 May 2023 20:20:45 +0800
From: Ze Gao <zegao2021@...il.com>
To: Song Liu <song@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>
Cc: Ze Gao <zegao@...cent.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org
Subject: [PATCH] bpf: reject blacklisted symbols in kprobe_multi to avoid recursive trap
BPF_LINK_TYPE_KPROBE_MULTI attaches kprobe programs through fprobe,
however it does not take the kprobe blacklist into consideration,
which can introduce recursive traps and blow up the stack.

This patch adds a simple check during bpf_kprobe_multi_link_attach and
filters out any addresses that are in the kprobe blacklist before they
are handed to the fprobe. If every requested address turns out to be
blacklisted, the attach fails with -EINVAL. The new helper
check_kprobe_address_safe is also left open for more checks in the
future.

Note that ftrace provides a recursion detection mechanism, but for
kprobes we can simply reject these cases early without having to rely
on ftrace.
Signed-off-by: Ze Gao <zegao@...cent.com>
---
kernel/trace/bpf_trace.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9a050e36dc6c..44c68bc06bbd 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2764,6 +2764,37 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u32 addrs_cnt)
 	return arr.mods_cnt;
 }
 
+static inline int check_kprobe_address_safe(unsigned long addr)
+{
+	if (within_kprobe_blacklist(addr))
+		return -EINVAL;
+	else
+		return 0;
+}
+
+static int check_bpf_kprobe_addrs_safe(unsigned long *addrs, int num)
+{
+	int i, cnt;
+	char symname[KSYM_NAME_LEN];
+
+	for (i = 0; i < num; ++i) {
+		if (check_kprobe_address_safe((unsigned long)addrs[i])) {
+			lookup_symbol_name(addrs[i], symname);
+			pr_warn("bpf_kprobe: %s at %lx is blacklisted\n", symname, addrs[i]);
+			/* mark blacklisted symbols for removal */
+			addrs[i] = 0;
+		}
+	}
+
+	/* remove blacklisted symbols from addrs */
+	for (i = 0, cnt = 0; i < num; ++i) {
+		if (addrs[i])
+			addrs[cnt++] = addrs[i];
+	}
+
+	return cnt;
+}
+
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
 	struct bpf_kprobe_multi_link *link = NULL;
@@ -2859,6 +2890,12 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 	else
 		link->fp.entry_handler = kprobe_multi_link_handler;
 
+	cnt = check_bpf_kprobe_addrs_safe(addrs, cnt);
+	if (!cnt) {
+		err = -EINVAL;
+		goto error;
+	}
+
 	link->addrs = addrs;
 	link->cookies = cookies;
 	link->cnt = cnt;
--
2.40.1