Message-ID: <CAADnVQJNekoXnai0VGOVj8Q3e5RPtTXhNRjdfxF_PxjoQLDYRA@mail.gmail.com>
Date: Mon, 12 May 2025 08:25:20 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Leon Hwang <leon.hwang@...ux.dev>
Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>, Kafai Wan <mannkafai@...il.com>, 
	Song Liu <song@...nel.org>, Jiri Olsa <jolsa@...nel.org>, Alexei Starovoitov <ast@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, 
	Martin KaFai Lau <martin.lau@...ux.dev>, Eduard <eddyz87@...il.com>, 
	Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>, 
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>, 
	Matt Bobrowski <mattbobrowski@...gle.com>, Steven Rostedt <rostedt@...dmis.org>, 
	Masami Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, 
	Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>, LKML <linux-kernel@...r.kernel.org>, 
	bpf <bpf@...r.kernel.org>, 
	linux-trace-kernel <linux-trace-kernel@...r.kernel.org>, 
	Network Development <netdev@...r.kernel.org>, 
	"open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH bpf-next 1/4] bpf: Allow get_func_[arg|arg_cnt] helpers in
 raw tracepoint programs

On Mon, May 12, 2025 at 4:12 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
>
>
>
> On 2025/5/7 05:01, Andrii Nakryiko wrote:
> > On Fri, May 2, 2025 at 7:26 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
> >>
> >>
> >>
> >> On 2025/5/1 00:53, Alexei Starovoitov wrote:
> >>> On Wed, Apr 30, 2025 at 8:55 AM Leon Hwang <leon.hwang@...ux.dev> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 2025/4/30 20:43, Kafai Wan wrote:
> >>>>> On Wed, Apr 30, 2025 at 10:46 AM Alexei Starovoitov
> >>>>> <alexei.starovoitov@...il.com> wrote:
> >>>>>>
> >>>>>> On Sat, Apr 26, 2025 at 9:00 AM KaFai Wan <mannkafai@...il.com> wrote:
> >>>>>>>
> >>>>
> >>
> >> [...]
> >>
> >>>>
> >>>>
> >>>> bpf_get_func_arg() will be very helpful for bpfsnoop[1] when tracing tp_btf.
> >>>>
> >>>> bpfsnoop can generate a small snippet of bpf instructions that uses
> >>>> bpf_get_func_arg() to retrieve and filter arguments. For example, with
> >>>> the netif_receive_skb tracepoint, bpfsnoop can use bpf_get_func_arg()
> >>>> to filter the skb argument using pcap-filter(7)[2] or a custom
> >>>> attribute-based filter. This will allow bpfsnoop to trace multiple
> >>>> tracepoints with a single piece of bpf program code.
> >>>
> >>> I doubt you thought it through end to end.
> >>> When tracepoint prog attaches we have this check:
> >>>         /*
> >>>          * check that program doesn't access arguments beyond what's
> >>>          * available in this tracepoint
> >>>          */
> >>>         if (prog->aux->max_ctx_offset > btp->num_args * sizeof(u64))
> >>>                 return -EINVAL;
> >>>
> >>> So you cannot have a single bpf prog attached to many tracepoints
> >>> to read many arguments as-is.
> >>> You can hack around that limit with probe_read,
> >>> but the values won't be trusted and you won't be able to pass
> >>> such untrusted pointers into skb and other helpers/kfuncs.
> >>
> >> I understand that a single bpf program cannot be attached to multiple
> >> tracepoints using tp_btf. However, the same bpf code can be reused to
> >> create multiple bpf programs, each attached to a different tracepoint.
> >>
> >> For example:
> >>
> >> SEC("fentry")
> >> int BPF_PROG(fentry_fn)
> >> {
> >>         /* ... */
> >>         return BPF_OK;
> >> }
> >>
> >> The above fentry code can be compiled into multiple bpf programs to
> >> trace different kernel functions. Each program can then use the
> >> bpf_get_func_arg() helper to access the arguments of the traced function.
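> >>
> >> As a rough sketch (illustrative only, with a made-up program name and
> >> the usual vmlinux.h/bpf_helpers.h/bpf_tracing.h includes assumed), such
> >> a reusable body could look like:
> >>
> >> SEC("fentry")
> >> int BPF_PROG(fentry_generic)
> >> {
> >>         __u64 nr_args = bpf_get_func_arg_cnt(ctx);
> >>         __u64 arg0 = 0;
> >>
> >>         /* ctx is provided by the BPF_PROG() macro; this works the same
> >>          * regardless of which kernel function this copy of the program
> >>          * is attached to.
> >>          */
> >>         if (nr_args > 0)
> >>                 (void) bpf_get_func_arg(ctx, 0, &arg0);
> >>
> >>         bpf_printk("arg0=%llu (of %llu args)", arg0, nr_args);
> >>         return BPF_OK;
> >> }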
> >>
> >> With this patch, tp_btf will gain similar flexibility. For example:
> >>
> >> SEC("tp_btf")
> >> int BPF_PROG(tp_btf_fn)
> >> {
> >>         /* ... */
> >>         return BPF_OK;
> >> }
> >>
> >> Here, bpf_get_func_arg() can be used to access tracepoint arguments.
> >>
> >> Currently, due to the lack of bpf_get_func_arg() support in tp_btf,
> >> bpfsnoop[1] uses bpf_probe_read_kernel() to read tracepoint arguments.
> >> This is also used when filtering specific argument attributes.
> >>
> >> For instance, to filter the skb argument of the netif_receive_skb
> >> tracepoint by 'skb->dev->ifindex == 2', the translated bpf instructions
> >> with bpf_probe_read_kernel() would look like this:
> >>
> >> bool filter_arg(__u64 * args):
> >> ; filter_arg(__u64 *args)
> >>  209: (79) r1 = *(u64 *)(r1 +0) /* all tracepoint arguments have been read into args using bpf_probe_read_kernel() */
> >>  210: (bf) r3 = r1
> >>  211: (07) r3 += 16
> >>  212: (b7) r2 = 8
> >>  213: (bf) r1 = r10
> >>  214: (07) r1 += -8
> >>  215: (85) call bpf_probe_read_kernel#-125280
> >>  216: (79) r3 = *(u64 *)(r10 -8)
> >>  217: (15) if r3 == 0x0 goto pc+10
> >>  218: (07) r3 += 224
> >>  219: (b7) r2 = 8
> >>  220: (bf) r1 = r10
> >>  221: (07) r1 += -8
> >>  222: (85) call bpf_probe_read_kernel#-125280
> >>  223: (79) r3 = *(u64 *)(r10 -8)
> >>  224: (67) r3 <<= 32
> >>  225: (77) r3 >>= 32
> >>  226: (b7) r0 = 1
> >>  227: (15) if r3 == 0x2 goto pc+1
> >>  228: (af) r0 ^= r0
> >>  229: (95) exit
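> >>
> >> Roughly, those instructions correspond to the following C (a sketch
> >> with a made-up function name, assuming the usual vmlinux.h and
> >> bpf_helpers.h includes):
> >>
> >> static __noinline bool
> >> filter_skb_probe_read(__u64 *args)
> >> {
> >>         struct sk_buff *skb = (struct sk_buff *) args[0];
> >>         struct net_device *dev;
> >>         int ifindex;
> >>
> >>         /* Every pointer hop needs an explicit probe read, and each
> >>          * result is just an untrusted scalar to the verifier.
> >>          */
> >>         if (bpf_probe_read_kernel(&dev, sizeof(dev), &skb->dev) || !dev)
> >>                 return false;
> >>         if (bpf_probe_read_kernel(&ifindex, sizeof(ifindex), &dev->ifindex))
> >>                 return false;
> >>
> >>         return ifindex == 2;
> >> }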
> >>
> >> If bpf_get_func_arg() is supported in tp_btf, the bpf program will
> >> instead look like:
> >>
> >> static __noinline bool
> >> filter_skb(void *ctx)
> >> {
> >>     struct sk_buff *skb;
> >>
> >>     (void) bpf_get_func_arg(ctx, 0, (__u64 *) &skb);
> >>     return skb->dev->ifindex == 2;
> >> }
> >>
> >> This will simplify the generated code and eliminate the need for
> >> bpf_probe_read_kernel() calls. However, in my tests (on kernel
> >> 6.8.0-35-generic, Ubuntu 24.04 LTS), the pointer returned by
> >> bpf_get_func_arg() is marked as a scalar rather than a trusted pointer:
> >>
> >>         0: R1=ctx() R10=fp0
> >>         ; if (!filter_skb(ctx))
> >>         0: (85) call pc+3
> >>         caller:
> >>          R10=fp0
> >>         callee:
> >>          frame1: R1=ctx() R10=fp0
> >>         4: frame1: R1=ctx() R10=fp0
> >>         ; filter_skb(void *ctx)
> >>         4: (bf) r3 = r10                      ; frame1: R3_w=fp0 R10=fp0
> >>         ;
> >>         5: (07) r3 += -8                      ; frame1: R3_w=fp-8
> >>         ; (void) bpf_get_func_arg(ctx, 0, (__u64 *) &skb);
> >>         6: (b7) r2 = 0                        ; frame1: R2_w=0
> >>         7: (85) call bpf_get_func_arg#183     ; frame1: R0_w=scalar()
> >>         ; return skb->dev->ifindex == 2;
> >>         8: (79) r1 = *(u64 *)(r10 -8)         ; frame1: R1_w=scalar() R10=fp0 fp-8=mmmmmmmm
> >>         ; return skb->dev->ifindex == 2;
> >>         9: (79) r1 = *(u64 *)(r1 +16)
> >>         R1 invalid mem access 'scalar'
> >>         processed 7 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> >>
> >> If the returned skb were a trusted pointer, the verifier would accept
> >> something like:
> >>
> >> static __noinline bool
> >> filter_skb(struct sk_buff *skb)
> >> {
> >>     return skb->dev->ifindex == 2;
> >> }
> >>
> >> This compiles into much simpler and more efficient instructions:
> >>
> >> bool filter_skb(struct sk_buff * skb):
> >> ; return skb->dev->ifindex == 2;
> >>   92: (79) r1 = *(u64 *)(r1 +16)
> >> ; return skb->dev->ifindex == 2;
> >>   93: (61) r1 = *(u32 *)(r1 +224)
> >>   94: (b7) r0 = 1
> >> ; return skb->dev->ifindex == 2;
> >>   95: (15) if r1 == 0x2 goto pc+1
> >>   96: (b7) r0 = 0
> >> ; return skb->dev->ifindex == 2;
> >>   97: (95) exit
> >>
> >> In conclusion:
> >>
> >> 1. It would be better if the pointer returned by bpf_get_func_arg() were
> >> trusted, but only when the argument index is a known constant.
> >
> > bpf_get_func_arg() was never meant to return trusted arguments, so
> > this, IMO, is pushing it too far.
> >
> >> 2. Adding bpf_get_func_arg() support to tp_btf will significantly
> >> simplify and improve tools like bpfsnoop.
> >
> > "Significantly simplify and improve" is a bit of an exaggeration, given
> > that BPF cookies can be used to get the number of arguments of a tp_btf
> > program. As for getting rid of bpf_probe_read_kernel(), tbh, a more
> > generally useful addition would be an untyped counterpart to
> > bpf_core_cast(), which wouldn't need BTF type information but would
> > treat all accessed memory as raw bytes (while still installing an
> > exception handler, just like bpf_core_cast()).
> >
>
> Cool! The bpf_rdonly_cast() kfunc used by the bpf_core_cast() macro
> works well in bpfsnoop.
>
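> In C terms, the filter now looks roughly like this (a sketch with a
> made-up function name, not bpfsnoop's exact codegen, since bpfsnoop
> emits the instructions directly; assumes bpf_core_read.h for
> bpf_core_cast()):
>
> static __noinline bool
> filter_arg_cast(__u64 *args)
> {
>         /* bpf_core_cast() wraps the bpf_rdonly_cast() kfunc and yields a
>          * read-only, BTF-typed pointer that can be dereferenced without
>          * explicit bpf_probe_read_kernel() calls.
>          */
>         struct sk_buff *skb = bpf_core_cast((void *) args[0], struct sk_buff);
>         struct net_device *dev = bpf_core_cast(skb->dev, struct net_device);
>
>         return dev && dev->ifindex == 2;
> }
>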
> The expression 'skb->dev->ifindex == 2' is translated into:
>
> bool filter_arg(__u64 * args):
> ; filter_arg(__u64 *args)
>  209: (bf) r9 = r1
>  210: (79) r8 = *(u64 *)(r9 +0)
>  211: (bf) r1 = r8
>  212: (b7) r2 = 6973
>  213: (bf) r0 = r1
>  214: (79) r1 = *(u64 *)(r0 +16)
>  215: (15) if r1 == 0x0 goto pc+12
>  216: (07) r1 += 224
>  217: (bf) r3 = r1
>  218: (b7) r2 = 8
>  219: (bf) r1 = r10
>  220: (07) r1 += -8
>  221: (85) call bpf_probe_read_kernel#-125280
>  222: (79) r8 = *(u64 *)(r10 -8)
>  223: (67) r8 <<= 32
>  224: (77) r8 >>= 32
>  225: (55) if r8 != 0x2 goto pc+2
>  226: (b7) r8 = 1
>  227: (05) goto pc+1
>  228: (af) r8 ^= r8
>  229: (bf) r0 = r8
>  230: (95) exit
>
> However, since bpf_rdonly_cast() is a kfunc, it causes registers r1–r5
> to be considered volatile.

It does not.
See:
BTF_ID_FLAGS(func, bpf_rdonly_cast, KF_FASTCALL)
and the relevant commits.
