Message-ID: <090dcc3c-f9a6-4ff8-873f-e0725329a94d@arnaud-lcm.com>
Date: Fri, 5 Sep 2025 00:53:46 +0200
From: "Lecomte, Arnaud" <contact@...aud-lcm.com>
To: Song Liu <song@...nel.org>, alexei.starovoitov@...il.com,
yonghong.song@...ux.dev
Cc: andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org,
daniel@...earbox.net, eddyz87@...il.com, haoluo@...gle.com,
john.fastabend@...il.com, jolsa@...nel.org, kpsingh@...nel.org,
linux-kernel@...r.kernel.org, martin.lau@...ux.dev, sdf@...ichev.me,
syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com
Subject: Re: [PATCH bpf-next v7 3/3] bpf: fix stackmap overflow check in
__bpf_get_stackid()
On 05/09/2025 00:45, Song Liu wrote:
>
> On 9/3/25 4:43 PM, Arnaud Lecomte wrote:
>> Syzkaller reported a KASAN slab-out-of-bounds write in
>> __bpf_get_stackid()
>> when copying stack trace data. The issue occurs when the perf trace
>> contains more stack entries than the stack map bucket can hold,
>> leading to an out-of-bounds write in the bucket's data array.
>>
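(For context: the out-of-bounds write happens in the bucket copy in
__bpf_get_stackid(), which trusts the callchain length. Roughly, and
simplified from kernel/bpf/stackmap.c:

	trace_nr = trace->nr - skip;
	trace_len = trace_nr * sizeof(u64);
	...
	/* bucket data only holds map->value_size bytes */
	memcpy(new_bucket->data, ips, trace_len);

so a callchain with more entries than the bucket capacity overflows.)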
>> Changes in v2:
>> - Fixed max_depth names across get stack id
>>
>> Changes in v4:
>> - Removed unnecessary empty line in __bpf_get_stackid
>>
>> Changes in v6:
>> - Added back trace_len computation in __bpf_get_stackid
>>
>> Link to v6:
>> https://lore.kernel.org/all/20250903135348.97884-1-contact@arnaud-lcm.com/
>>
>> Reported-by: syzbot+c9b724fbb41cf2538b7b@...kaller.appspotmail.com
>> Closes: https://syzkaller.appspot.com/bug?extid=c9b724fbb41cf2538b7b
>> Fixes: ee2a098851bf ("bpf: Adjust BPF stack helper functions to accommodate skip > 0")
>> Signed-off-by: Arnaud Lecomte <contact@...aud-lcm.com>
>> Acked-by: Yonghong Song <yonghong.song@...ux.dev>
>
> For future patches, please keep the "Changes in vX.." at the end of
> your commit log and after a "---". IOW, something like
>
>
> Acked-by: Yonghong Song <yonghong.song@...ux.dev>
>
> ---
>
> changes in v2:
>
> ...
>
> ---
>
> kernel/bpf/stackmap.c | 8 ++++++++
>
>
> In this way, the "changes in vXX" part will be removed by git-am.
>
Good to know, thanks!
>> ---
>> kernel/bpf/stackmap.c | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
>> index 9f3ae426ddc3..29e05c9ff1bd 100644
>> --- a/kernel/bpf/stackmap.c
>> +++ b/kernel/bpf/stackmap.c
>> @@ -369,6 +369,7 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
>>  {
>>  	struct perf_event *event = ctx->event;
>>  	struct perf_callchain_entry *trace;
>> +	u32 elem_size, max_depth;
>>  	bool kernel, user;
>>  	__u64 nr_kernel;
>>  	int ret;
>> @@ -390,11 +391,15 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
>>  		return -EFAULT;
>>
>>  	nr_kernel = count_kernel_ip(trace);
>> +	elem_size = stack_map_data_size(map);
>>
>>  	if (kernel) {
>>  		__u64 nr = trace->nr;
>>
>>  		trace->nr = nr_kernel;
>
> this trace->nr assignment is useless.
>
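Right, the min_t() just below recomputes from nr_kernel, so that store
is dead:

	trace->nr = nr_kernel;				/* dead store */
	trace->nr = min_t(u32, nr_kernel, max_depth);	/* overwrites it */

I'll drop it in the next revision.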
>> +		max_depth =
>> +			stack_map_calculate_max_depth(map->value_size, elem_size, flags);
>> +		trace->nr = min_t(u32, nr_kernel, max_depth);
>>  		ret = __bpf_get_stackid(map, trace, flags);
>>
>>  		/* restore nr */
>> @@ -407,6 +412,9 @@ BPF_CALL_3(bpf_get_stackid_pe, struct bpf_perf_event_data_kern *, ctx,
>>  			return -EFAULT;
>>
>>  		flags = (flags & ~BPF_F_SKIP_FIELD_MASK) | skip;
>> +		max_depth =
>> +			stack_map_calculate_max_depth(map->value_size, elem_size, flags);
>> +		trace->nr = min_t(u32, trace->nr, max_depth);
>>  		ret = __bpf_get_stackid(map, trace, flags);
>
> I missed this part earlier. Here we need to restore trace->nr, just
> like we did in the "if (kernel)" branch.
>
Makes sense, thanks!
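For the next revision I'll mirror the kernel branch then; an untested
sketch of what I have in mind:

	} else { /* user */
		__u64 nr = trace->nr;

		...
		flags = (flags & ~BPF_F_SKIP_FIELD_MASK) | skip;
		max_depth = stack_map_calculate_max_depth(map->value_size,
							  elem_size, flags);
		trace->nr = min_t(u32, trace->nr, max_depth);
		ret = __bpf_get_stackid(map, trace, flags);
		/* restore nr */
		trace->nr = nr;
	}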
> Thanks,
>
> Song
>
>>  	}
>>
>>  	return ret;
>
Thanks,
Arnaud