Message-ID: <20251112133546.4246533f@pumpkin>
Date: Wed, 12 Nov 2025 13:35:46 +0000
From: David Laight <david.laight.linux@...il.com>
To: Brahmajit Das <listout@...tout.xyz>
Cc: syzbot+d1b7fa1092def3628bd7@...kaller.appspotmail.com,
 andrii@...nel.org, ast@...nel.org, bpf@...r.kernel.org,
 contact@...aud-lcm.com, daniel@...earbox.net, eddyz87@...il.com,
 haoluo@...gle.com, john.fastabend@...il.com, jolsa@...nel.org,
 kpsingh@...nel.org, linux-kernel@...r.kernel.org, martin.lau@...ux.dev,
 netdev@...r.kernel.org, sdf@...ichev.me, song@...nel.org,
 syzkaller-bugs@...glegroups.com, yonghong.song@...ux.dev
Subject: Re: [PATCH bpf-next v3] bpf: Clamp trace length in __bpf_get_stack
 to fix OOB write

On Tue, 11 Nov 2025 13:42:54 +0530
Brahmajit Das <listout@...tout.xyz> wrote:

> syzbot reported a stack-out-of-bounds write in __bpf_get_stack()
> triggered via bpf_get_stack() when capturing a kernel stack trace.
> 
> After the recent refactor that introduced stack_map_calculate_max_depth(),
> the code in stack_map_get_build_id_offset() (and related helpers) stopped
> clamping the number of trace entries (`trace_nr`) to the number of elements
> that fit into the stack map value (`num_elem`).
> 
> As a result, if the captured stack contains more frames than the map value
> can hold, the subsequent memcpy() writes past the end of the buffer,
> triggering a KASAN report like:
> 
>     BUG: KASAN: stack-out-of-bounds in __bpf_get_stack+0x...
>     Write of size N at addr ... by task syz-executor...
> 
> Restore the missing clamp by limiting `trace_nr` to `num_elem` before
> computing the copy length. This mirrors the pre-refactor logic and ensures
> we never copy more bytes than the destination buffer can hold.
> 
> No functional change intended beyond reintroducing the missing bound check.
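> 
> For illustration, the overflow arithmetic with made-up sizes (a
> standalone sketch, not the kernel code): with elem_size = 8, a 64-byte
> map value holds num_elem = 8 entries, so a 20-frame trace must be
> clamped before computing copy_len:
> 
> 	#include <stdio.h>
> 	#include <stdint.h>
> 
> 	int main(void)
> 	{
> 		uint32_t elem_size = 8;	/* bytes per stored frame */
> 		uint32_t size = 64;	/* bytes available in the map value */
> 		uint32_t num_elem = size / elem_size;	/* 8 entries fit */
> 		uint32_t trace_nr = 20;	/* frames actually captured */
> 
> 		/* Unclamped: 160 bytes, well past the 64-byte buffer. */
> 		printf("unclamped copy_len = %u\n", trace_nr * elem_size);
> 
> 		/* Clamped as in the patch: 64 bytes, in bounds. */
> 		if (trace_nr > num_elem)
> 			trace_nr = num_elem;
> 		printf("clamped copy_len = %u\n", trace_nr * elem_size);
> 		return 0;
> 	}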
> 
> Reported-by: syzbot+d1b7fa1092def3628bd7@...kaller.appspotmail.com
> Fixes: e17d62fedd10 ("bpf: Refactor stack map trace depth calculation into helper function")
> Signed-off-by: Brahmajit Das <listout@...tout.xyz>
> ---
> Changes in v3:
> Revert to the num_elem-based logic for clamping trace_nr, as suggested
> by the bpf-ci bot, which pointed out a possible underflow when
> max_depth < skip.
> 
> Quoting the bot's reply:
> The stack_map_calculate_max_depth() function can return a value less than
> skip when sysctl_perf_event_max_stack is lowered below the skip value:
> 
>     max_depth = size / elem_size;
>     max_depth += skip;
>     if (max_depth > curr_sysctl_max_stack)
>         return curr_sysctl_max_stack;
> 
> If sysctl_perf_event_max_stack = 10 and skip = 20, this returns 10.
> 
> Then max_depth - skip = 10 - 20 underflows to 4294967286 (u32 wraps),
> causing min_t() to not limit trace_nr at all. This means the original OOB
> write is not fixed in cases where skip > max_depth.
> 
> With the default sysctl_perf_event_max_stack = 127 and skip values up to
> 255, this scenario is reachable even without an admin changing any sysctls.
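> 
> A standalone sketch of that wrap (illustrative values, not the kernel
> code):
> 
> 	#include <stdio.h>
> 	#include <stdint.h>
> 
> 	int main(void)
> 	{
> 		uint32_t max_depth = 10, skip = 20, trace_nr = 50;
> 		uint32_t limit = max_depth - skip;	/* wraps to 4294967286 */
> 
> 		/* min_t(u32, trace_nr, limit) reduces to this comparison
> 		 * and picks 50, so trace_nr is not limited at all. */
> 		uint32_t clamped = trace_nr < limit ? trace_nr : limit;
> 
> 		printf("limit = %u, clamped trace_nr = %u\n", limit, clamped);
> 		return 0;
> 	}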
> 
> Changes in v2:
> - Use max_depth instead of the num_elem logic; this mirrors what we
> already do in __bpf_get_stackid.
> Link: https://lore.kernel.org/all/20251111003721.7629-1-listout@listout.xyz/
> 
> Changes in v1:
> - RFC patch that restores the clamp by setting trace_nr to the smaller
> of trace_nr and num_elem.
> Link: https://lore.kernel.org/all/20251110211640.963-1-listout@listout.xyz/
> ---
>  kernel/bpf/stackmap.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 2365541c81dd..cef79d9517ab 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -426,7 +426,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>  			    struct perf_callchain_entry *trace_in,
>  			    void *buf, u32 size, u64 flags, bool may_fault)
>  {
> -	u32 trace_nr, copy_len, elem_size, max_depth;
> +	u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
>  	bool user_build_id = flags & BPF_F_USER_BUILD_ID;
>  	bool crosstask = task && task != current;
>  	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
> @@ -480,6 +480,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>  	}
>  
>  	trace_nr = trace->nr - skip;
> +	num_elem = size / elem_size;
> +	trace_nr = min_t(u32, trace_nr, num_elem);

Please can we have no unnecessary min_t().
You wouldn't write:
	x = (u32)a < (u32)b ? (u32)a : (u32)b;
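
Since trace_nr and num_elem are both u32 here, plain min() should be
enough, something like (untested):

	num_elem = size / elem_size;
	trace_nr = min(trace_nr, num_elem);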

    David
 
>  	copy_len = trace_nr * elem_size;
>  
>  	ips = trace->ip + skip;

