Message-Id: <20260128061753.1857803-1-wangqing7171@gmail.com>
Date: Wed, 28 Jan 2026 14:17:53 +0800
From: Qing Wang <wangqing7171@...il.com>
To: Song Liu <song@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
peterz@...radead.org,
acme@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>,
Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>,
Hao Luo <haoluo@...gle.com>
Cc: bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org,
Qing Wang <wangqing7171@...il.com>,
syzbot+72a43cdb78469f7fbad1@...kaller.appspotmail.com
Subject: [PATCH] bpf/perf: Fix suspicious RCU usage in get_callchain_entry()
A previous patch intended to fix the suspicious RCU usage in
get_callchain_entry(), but it is incorrect: rcu_read_lock()/rcu_read_unlock()
are not called when may_fault == false.

Previous discussion:
https://lore.kernel.org/all/CAEf4BzaYL9zZN8TZyRHW3_O3vbHc7On+NSunrkDvDQx2=wwyRw@mail.gmail.com/#R

RCU protection is needed whenever perf's callchain machinery is actually
used, i.e. whenever trace_in == false, so gate rcu_read_lock()/
rcu_read_unlock() on !trace_in rather than on may_fault.
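
For reference, the resulting locking pattern in __bpf_get_stack() looks
roughly like this (a simplified sketch reconstructed from the hunks below;
the body of the trace_in branch and the copy-out of ips are paraphrased,
and the get_perf_callchain() arguments are elided):

	if (!trace_in)
		rcu_read_lock();	/* need RCU for perf's callchain below */

	if (trace_in)
		trace = trace_in;	/* caller-supplied trace, no RCU needed */
	else
		trace = get_perf_callchain(regs, ...);

	if (unlikely(!trace) || trace->nr < skip) {
		if (!trace_in)
			rcu_read_unlock();
		goto err_fault;
	}

	/* ... copy ips out of the trace ... */

	/* trace/ips should not be dereferenced after this point */
	if (!trace_in)
		rcu_read_unlock();
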
Fixes: d4dd9775ec24 ("bpf: wire up sleepable bpf_get_stack() and bpf_get_task_stack() helpers")
Reported-by: syzbot+72a43cdb78469f7fbad1@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=72a43cdb78469f7fbad1
Tested-by: syzbot+72a43cdb78469f7fbad1@...kaller.appspotmail.com
Signed-off-by: Qing Wang <wangqing7171@...il.com>
---
kernel/bpf/stackmap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index da3d328f5c15..f97d4aa9d038 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -460,7 +460,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
max_depth = stack_map_calculate_max_depth(size, elem_size, flags);
- if (may_fault)
+ if (!trace_in)
rcu_read_lock(); /* need RCU for perf's callchain below */
if (trace_in) {
@@ -474,7 +474,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
}
if (unlikely(!trace) || trace->nr < skip) {
- if (may_fault)
+ if (!trace_in)
rcu_read_unlock();
goto err_fault;
}
@@ -494,7 +494,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
}
/* trace/ips should not be dereferenced after this point */
- if (may_fault)
+ if (!trace_in)
rcu_read_unlock();
if (user_build_id)
--
2.34.1