Message-Id: <20240607131930.12002-1-wojciech.gladysz@infogain.com>
Date: Fri, 7 Jun 2024 15:19:30 +0200
From: Wojciech Gładysz <wojciech.gladysz@...ogain.com>
To: song@...nel.org,
jolsa@...nel.org,
ast@...nel.org,
daniel@...earbox.net,
andrii@...nel.org,
martin.lau@...ux.dev,
eddyz87@...il.com,
yonghong.song@...ux.dev,
john.fastabend@...il.com,
kpsingh@...nel.org,
sdf@...gle.com,
haoluo@...gle.com,
rostedt@...dmis.org,
mhiramat@...nel.org,
mathieu.desnoyers@...icios.com,
bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: Wojciech Gładysz <wojciech.gladysz@...ogain.com>,
syzbot+9d95beb2a3c260622518@...kaller.appspotmail.com
Subject: [PATCH] kernel/trace: fix possible deadlock in trie_delete_elem

During BPF syscall map operations, bpf_disable_instrumentation() is
called for the reason described in the comment above that function;
that description matches this bug case. The function increments the
per-CPU counter bpf_prog_active, but the counter is never consulted on
the BPF raw-tracepoint path, which only checks the per-program
prog->active recursion counter. Make __bpf_trace_run() check
bpf_prog_active instead, mirroring the kprobe path, so that
tracepoint-attached programs cannot run while instrumentation is
disabled. This slightly degrades BPF tracing: trace invocations that
might otherwise deadlock are skipped and accounted as misses.
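
For reference, the helper and the comment it refers to look roughly
like this in include/linux/bpf.h (paraphrased here for context, not
part of this patch):

	/*
	 * Block execution of BPF programs attached to instrumentation
	 * (perf, kprobes, tracepoints) to prevent deadlocks on map
	 * operations as any of those might be executed in a BPF
	 * program context.
	 */
	static inline void bpf_disable_instrumentation(void)
	{
		migrate_disable();
		this_cpu_inc(bpf_prog_active);
	}

	static inline void bpf_enable_instrumentation(void)
	{
		this_cpu_dec(bpf_prog_active);
		migrate_enable();
	}
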
Reported-by: syzbot+9d95beb2a3c260622518@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=9d95beb2a3c260622518
Link: https://lore.kernel.org/all/000000000000adb08b061413919e@google.com/T/
Signed-off-by: Wojciech Gładysz <wojciech.gladysz@...ogain.com>
---
kernel/trace/bpf_trace.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
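
Note for reviewers: the kprobe-side pattern this change mirrors is the
bpf_prog_active check in trace_call_bpf() in kernel/trace/bpf_trace.c,
abridged below (unrelated details elided):

	unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
	{
		unsigned int ret;

		cant_sleep();

		if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
			/* a BPF program already runs on this CPU: bail out */
			ret = 0;
			goto out;
		}

		/* ... run the attached program array ... */
	out:
		__this_cpu_dec(bpf_prog_active);
		return ret;
	}
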
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6249dac61701..8de2e084b162 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2391,7 +2391,9 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
 	struct bpf_trace_run_ctx run_ctx;
 
 	cant_sleep();
-	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
+
+	/* skip when instrumentation is disabled or another prog runs here */
+	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		bpf_prog_inc_misses_counter(prog);
 		goto out;
 	}
@@ -2405,7 +2407,7 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
 
 	bpf_reset_run_ctx(old_run_ctx);
 out:
-	this_cpu_dec(*(prog->active));
+	__this_cpu_dec(bpf_prog_active);
 }
 
 #define UNPACK(...) __VA_ARGS__
--
2.35.3