Message-ID: <CAEf4Bzbn=RVhMOR7RapYwi+s8gbVS=1msOuZ7MhPvgz8zHiE9w@mail.gmail.com>
Date: Thu, 12 Jun 2025 14:29:12 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Tao Chen <chen.dylane@...ux.dev>
Cc: kpsingh@...nel.org, mattbobrowski@...gle.com, ast@...nel.org, 
	daniel@...earbox.net, andrii@...nel.org, martin.lau@...ux.dev, 
	eddyz87@...il.com, song@...nel.org, yonghong.song@...ux.dev, 
	john.fastabend@...il.com, sdf@...ichev.me, haoluo@...gle.com, 
	jolsa@...nel.org, rostedt@...dmis.org, mhiramat@...nel.org, 
	mathieu.desnoyers@...icios.com, bpf@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next] bpf: clear user buf when bpf_d_path failed

On Wed, Jun 11, 2025 at 8:49 AM Tao Chen <chen.dylane@...ux.dev> wrote:
>
> The bpf_d_path() function may fail. If it does, clear the user
> buffer, as bpf_probe_read() and similar helpers do.
>

But that doesn't mean we *have to* do memset(0) for bpf_d_path(),
especially given that the path buffer can be pretty large (4KB).

Is there an issue you are trying to address with this, or is it more
of a consistency cleanup? Note that we fairly recently made this
zero-filling behavior opt-in through an extra flag (BPF_F_PAD_ZEROS)
for newer APIs. And if anything, bpf_d_path() is more akin to the
variable-sized string probing APIs than to the fixed-sized
bpf_probe_read*() family.
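
For reference, a rough sketch of what that opt-in zero-padding looks
like from the BPF program side; this assumes a kernel and vmlinux.h new
enough to provide the bpf_copy_from_user_str() kfunc and
BPF_F_PAD_ZEROS, and the sleepable uprobe attach point and argument
names are made up:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc declaration; recent libbpf ships this in <bpf/bpf_kfuncs.h> */
extern int bpf_copy_from_user_str(void *dst, u32 dst__sz,
				  const void *unsafe_ptr__ign,
				  u64 flags) __ksym;

char LICENSE[] SEC("license") = "GPL";

/* Sleepable uprobe; the target binary/function is hypothetical and
 * gets attached from user space. */
SEC("uprobe.s")
int BPF_UPROBE(copy_name, const char *name)
{
	char buf[64];
	int err;

	/* With BPF_F_PAD_ZEROS, bytes past the copied string are zeroed,
	 * and the whole buffer is zeroed on failure -- opt-in, per call. */
	err = bpf_copy_from_user_str(buf, sizeof(buf), name, BPF_F_PAD_ZEROS);
	if (err < 0)
		bpf_printk("copy failed: %d", err);
	return 0;
}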

In short, I feel like we should revert this and let users do the
zero-filling themselves, if they really need it;
bpf_probe_read_kernel(dst, sz, NULL) would do. But we should think
about adding a dynptr-based bpf_dynptr_memset() API for cases where
the size is not known statically, IMO.
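
To illustrate, a minimal (untested) sketch of that user-side
workaround, assuming filp_close() is still on the bpf_d_path()
allowlist and using a made-up program name and buffer size:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("fentry/filp_close")
int BPF_PROG(trace_close, struct file *filp)
{
	char buf[256];
	long len;

	len = bpf_d_path(&filp->f_path, buf, sizeof(buf));
	if (len < 0)
		/* bpf_probe_read_kernel() zero-fills dst on failure, so a
		 * deliberately failing read from NULL clears buf entirely. */
		bpf_probe_read_kernel(buf, sizeof(buf), NULL);

	return 0;
}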


> Signed-off-by: Tao Chen <chen.dylane@...ux.dev>
> ---
>  kernel/trace/bpf_trace.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0998cbbb963..bb1003cb271 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -916,11 +916,14 @@ BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
>          * potentially broken verifier.
>          */
>         len = copy_from_kernel_nofault(&copy, path, sizeof(*path));
> -       if (len < 0)
> +       if (len < 0) {
> +               memset(buf, 0, sz);
>                 return len;
> +       }
>
>         p = d_path(&copy, buf, sz);
>         if (IS_ERR(p)) {
> +               memset(buf, 0, sz);
>                 len = PTR_ERR(p);
>         } else {
>                 len = buf + sz - p;
> --
> 2.48.1
>
>
