Message-Id: <20250903135348.97884-1-contact@arnaud-lcm.com>
Date: Wed, 3 Sep 2025 15:53:48 +0200
From: Arnaud Lecomte <contact@...aud-lcm.com>
To: alexei.starovoitov@...il.com,
yonghong.song@...ux.dev,
song@...nel.org
Cc: andrii@...nel.org,
ast@...nel.org,
bpf@...r.kernel.org,
daniel@...earbox.net,
eddyz87@...il.com,
haoluo@...gle.com,
john.fastabend@...il.com,
jolsa@...nel.org,
kpsingh@...nel.org,
linux-kernel@...r.kernel.org,
martin.lau@...ux.dev,
sdf@...ichev.me,
syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com,
Arnaud Lecomte <contact@...aud-lcm.com>
Subject: [PATCH bpf-next v6 2/2] bpf: fix stackmap overflow check in
__bpf_get_stackid()

Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stackid()
when copying stack trace data. The issue occurs when the perf trace
contains more stack entries than the stack map bucket can hold,
leading to an out-of-bounds write in the bucket's data array.
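
To see the arithmetic, note that a bucket's data array holds
map->value_size bytes, i.e. value_size / elem_size stack entries, while
the perf callchain can supply up to sysctl_perf_event_max_stack entries,
so the copy of trace->nr - skip entries has to be clamped. Below is a
minimal user-space sketch of that clamp (VALUE_SIZE, ELEM_SIZE and
clamp_trace_nr are hypothetical stand-ins for what
stack_map_calculate_max_depth() derives in the kernel):

	#include <assert.h>
	#include <stdint.h>

	#define VALUE_SIZE 1016	/* hypothetical bytes per bucket */
	#define ELEM_SIZE     8	/* sizeof(u64) per stack entry */

	/* Mirrors the fix: never copy more entries than the bucket
	 * can hold beyond the skipped prefix. */
	static uint32_t clamp_trace_nr(uint32_t nr, uint32_t skip,
				       uint32_t max_depth)
	{
		uint32_t trace_nr = nr - skip;

		if (trace_nr > max_depth - skip)
			trace_nr = max_depth - skip;
		return trace_nr;
	}

	int main(void)
	{
		uint32_t max_depth = VALUE_SIZE / ELEM_SIZE; /* 127 */

		/* A 200-entry trace with skip == 0 must be cut to
		 * 127 entries, or the copy into bucket->data would
		 * run past the end of the bucket. */
		assert(clamp_trace_nr(200, 0, max_depth) == 127);
		return 0;
	}
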
Changes in v2:
- Fixed max_depth naming across the stackid helpers

Changes in v4:
- Removed unnecessary empty line in __bpf_get_stackid

Changes in v6:
- Added back trace_len computation in __bpf_get_stackid

Link to v5: https://lore.kernel.org/all/20250826212229.143230-1-contact@arnaud-lcm.com/

Reported-by: syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c9b724fbb41cf2538b7b
Fixes: ee2a098851bf ("bpf: Adjust BPF stack helper functions to accommodate skip > 0")
Signed-off-by: Arnaud Lecomte <contact@...aud-lcm.com>
Acked-by: Yonghong Song <yonghong.song@...ux.dev>
---
 kernel/bpf/stackmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 1ebc525b7c2f..8b2dcb8a6dc3 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -251,8 +251,9 @@ static long __bpf_get_stackid(struct bpf_map *map,
 {
 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
 	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
+	u32 hash, id, trace_nr, trace_len, i, max_depth;
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
-	u32 hash, id, trace_nr, trace_len, i;
+	u32 elem_size = stack_map_data_size(map);
 	bool user = flags & BPF_F_USER_STACK;
 	u64 *ips;
 	bool hash_matches;
@@ -261,8 +262,12 @@ static long __bpf_get_stackid(struct bpf_map *map,
 		/* skipping more than usable stack trace */
 		return -EFAULT;
 
+	max_depth =
+		stack_map_calculate_max_depth(map->value_size, elem_size, flags);
 	trace_nr = trace->nr - skip;
+	trace_nr = min_t(u32, trace_nr, max_depth - skip);
 	trace_len = trace_nr * sizeof(u64);
+
 	ips = trace->ip + skip;
 	hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
 	id = hash & (smap->n_buckets - 1);
--
2.47.3