Message-Id: <20250905134833.26791-1-contact@arnaud-lcm.com>
Date: Fri, 5 Sep 2025 15:48:33 +0200
From: Arnaud Lecomte <contact@...aud-lcm.com>
To: alexei.starovoitov@...il.com,
yonghong.song@...ux.dev,
song@...nel.org
Cc: andrii@...nel.org,
ast@...nel.org,
bpf@...r.kernel.org,
daniel@...earbox.net,
eddyz87@...il.com,
haoluo@...gle.com,
john.fastabend@...il.com,
jolsa@...nel.org,
kpsingh@...nel.org,
linux-kernel@...r.kernel.org,
martin.lau@...ux.dev,
sdf@...ichev.me,
syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com,
Arnaud Lecomte <contact@...aud-lcm.com>
Subject: [PATCH bpf-next v8 3/3] bpf: fix stackmap overflow check in
__bpf_get_stackid()
From: Arnaud Lecomte <contact@...aud-lcm.com>
Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stackid()
when copying stack trace data. The issue occurs when the perf trace
contains more stack entries than the stack map bucket can hold,
leading to an out-of-bounds write in the bucket's data array.
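For context, the arithmetic behind the overflow: a bucket's data[] holds
map->value_size bytes, i.e. value_size / elem_size stack entries, where
elem_size is 8 bytes per IP (or the size of a build-id record for
BPF_F_STACK_BUILD_ID maps). Below is a minimal sketch of the clamp this
patch applies; the real bound comes from stack_map_calculate_max_depth(),
introduced earlier in this series, which also accounts for the skip count
carried in flags:

	/* Sketch only; not the verbatim helper bodies. */
	u32 elem_size = stack_map_data_size(map);    /* 8, or build-id record size */
	u32 max_depth = map->value_size / elem_size; /* entries one bucket holds */

	/*
	 * Previously, __bpf_get_stackid() copied all trace->nr entries into
	 * the bucket, so a callchain deeper than max_depth wrote past the
	 * end of data[]. Clamping first keeps the copy in bounds:
	 */
	trace->nr = min_t(u32, trace->nr, max_depth);
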
Reported-by: syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c9b724fbb41cf2538b7b
Fixes: ee2a098851bf ("bpf: Adjust BPF stack helper functions to accommodate skip > 0")
Signed-off-by: Arnaud Lecomte <contact@...aud-lcm.com>
Acked-by: Yonghong Song <yonghong.song@...ux.dev>
---
Changes in v2:
- Fixed max_depth naming across the get_stackid helpers
Changes in v4:
- Removed unnecessary empty line in __bpf_get_stackid
Changes in v6:
- Added back trace_len computation in __bpf_get_stackid
Changes in v7:
- Removed a useless trace->nr assignment in bpf_get_stackid_pe
- Added restoration of trace->nr for both kernel and user traces
in bpf_get_stackid_pe
Link to v7: https://lore.kernel.org/all/20250903234325.30212-1-contact@arnaud-lcm.com/
---
kernel/bpf/stackmap.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 9f3ae426ddc3..9b57b8307565 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -369,6 +369,7 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
 {
 	struct perf_event *event = ctx->event;
 	struct perf_callchain_entry *trace;
+	u32 elem_size, max_depth;
 	bool kernel, user;
 	__u64 nr_kernel;
 	int ret;
@@ -390,15 +391,16 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
 		return -EFAULT;
 
 	nr_kernel = count_kernel_ip(trace);
+
+	elem_size = stack_map_data_size(map);
+	__u64 nr = trace->nr; /* save original */
 
 	if (kernel) {
-		__u64 nr = trace->nr;
-
 		trace->nr = nr_kernel;
+		max_depth =
+			stack_map_calculate_max_depth(map->value_size, elem_size, flags);
+		trace->nr = min_t(u32, nr_kernel, max_depth);
 		ret = __bpf_get_stackid(map, trace, flags);
-
-		/* restore nr */
-		trace->nr = nr;
 	} else { /* user */
 		u64 skip = flags & BPF_F_SKIP_FIELD_MASK;
 
@@ -407,8 +409,15 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
 			return -EFAULT;
 
 		flags = (flags & ~BPF_F_SKIP_FIELD_MASK) | skip;
+		max_depth =
+			stack_map_calculate_max_depth(map->value_size, elem_size, flags);
+		trace->nr = min_t(u32, trace->nr, max_depth);
 		ret = __bpf_get_stackid(map, trace, flags);
 	}
+
+	/* restore nr */
+	trace->nr = nr;
+
 	return ret;
 }
 
--
2.47.3