Message-Id: <ff2e76afdc594cd27a0a08606ffdb0e36ef427dbd5ff0e64d2437a1ad36b96a2@mail.kernel.org>
Date: Wed, 7 Jan 2026 07:11:14 +0000 (UTC)
From: bot+bpf-ci@...nel.org
To: menglong8.dong@...il.com,ast@...nel.org,andrii@...nel.org
Cc: daniel@...earbox.net,martin.lau@...ux.dev,eddyz87@...il.com,song@...nel.org,yonghong.song@...ux.dev,john.fastabend@...il.com,kpsingh@...nel.org,sdf@...ichev.me,haoluo@...gle.com,jolsa@...nel.org,davem@...emloft.net,dsahern@...nel.org,tglx@...utronix.de,mingo@...hat.com,jiang.biao@...ux.dev,bp@...en8.de,dave.hansen@...ux.intel.com,x86@...nel.org,hpa@...or.com,bpf@...r.kernel.org,netdev@...r.kernel.org,linux-kernel@...r.kernel.org,ast@...nel.org,andrii@...nel.org,daniel@...earbox.net,martin.lau@...nel.org,eddyz87@...il.com,yonghong.song@...ux.dev,clm@...a.com,ihor.solodrai@...ux.dev
Subject: Re: [PATCH bpf-next v7 02/11] bpf: use last 8-bits for the nr_args in trampoline
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 6e076485bf70..6b58f9a4dc92 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
[ ... ]
> @@ -1194,7 +1194,7 @@ const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
> BPF_CALL_3(get_func_arg, void *, ctx, u32, n, u64 *, value)
> {
> /* This helper call is inlined by verifier. */
> - u64 nr_args = ((u64 *)ctx)[-1];
> + u64 nr_args = ((u64 *)ctx)[-1] & 0xFF;
[ ... ]
> @@ -1214,7 +1214,7 @@ static const struct bpf_func_proto bpf_get_func_arg_proto = {
> BPF_CALL_2(get_func_ret, void *, ctx, u64 *, value)
> {
> /* This helper call is inlined by verifier. */
> - u64 nr_args = ((u64 *)ctx)[-1];
> + u64 nr_args = ((u64 *)ctx)[-1] & 0xFF;
>
> *value = ((u64 *)ctx)[nr_args];
> return 0;

Both get_func_arg() and get_func_ret() are updated with the 0xFF mask, but
should get_func_arg_cnt() also receive the same treatment?

Current code in get_func_arg_cnt():

BPF_CALL_1(get_func_arg_cnt, void *, ctx)
{
	/* This helper call is inlined by verifier. */
	return ((u64 *)ctx)[-1];
}
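
A minimal sketch of what a masked variant could look like, mirroring the
change applied to get_func_arg() and get_func_ret() (a suggestion only, not
code taken from the posted series):

BPF_CALL_1(get_func_arg_cnt, void *, ctx)
{
	/* Mask off any metadata stored in the upper bits of ctx[-1]. */
	return ((u64 *)ctx)[-1] & 0xFF;
}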

The comment says "inlined by verifier", but the verifier only performs that
inlining for BPF_PROG_TYPE_TRACING programs. BPF_PROG_TYPE_LSM programs with
the BPF_LSM_MAC attach type can also reach this helper: bpf_prog_has_trampoline()
returns true for them, so tracing_prog_func_proto() exposes it. Since
do_misc_fixups() inlines the call only for BPF_PROG_TYPE_TRACING, LSM programs
end up calling the non-inlined fallback above (the gating check is paraphrased
below).
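
For reference, the inline path in do_misc_fixups() is gated on the program
type roughly as follows (paraphrased from kernel/bpf/verifier.c, not verbatim):

	/* Implement bpf_get_func_arg_cnt inline. */
	if (prog_type == BPF_PROG_TYPE_TRACING &&
	    insn->imm == BPF_FUNC_get_func_arg_cnt) {
		/* Load the raw metadata word at ctx - 8 into R0. */
		insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
		...
	}

LSM programs never take this branch, so they reach the C fallback above.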

If subsequent patches store metadata in the upper bits of ctx[-1], could LSM
programs calling bpf_get_func_arg_cnt() then receive incorrect values? Later
commits in this series (93d4641b9513 "bpf,x86: add fsession support for
x86_64") do use the upper bits for the BPF_TRAMP_M_IS_RETURN and
BPF_TRAMP_M_COOKIE flags, and the series does not fix this regression.
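
To make the failure mode concrete, here is a tiny stand-alone illustration
(user-space C; the flag bit position is assumed purely for the example and is
not taken from the series):

#include <stdint.h>
#include <stdio.h>

/* Assumed bit position for illustration only; the real BPF_TRAMP_M_* layout
 * is whatever the later patches in this series define. */
#define EXAMPLE_FLAG_IN_UPPER_BITS (1ULL << 8)

int main(void)
{
	uint64_t metadata = 3 | EXAMPLE_FLAG_IN_UPPER_BITS; /* 3 args + a flag */

	/* What the non-inlined bpf_get_func_arg_cnt() returns after this series. */
	printf("unmasked: %llu\n", (unsigned long long)metadata);          /* 259 */

	/* What it would return with the same 0xFF mask used in this patch. */
	printf("masked:   %llu\n", (unsigned long long)(metadata & 0xFF)); /* 3 */
	return 0;
}
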
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20773233136