Message-ID: <CAEf4BzbG=GMh0-1tT_2gdMtc-ZuV3X7hgoJZpt1RLCYgPMM3oQ@mail.gmail.com>
Date: Fri, 16 Jan 2026 16:06:06 -0800
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Ihor Solodrai <ihor.solodrai@...ux.dev>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>, Mykyta Yatsenko <yatsenko@...a.com>, Tejun Heo <tj@...nel.org>,
Alan Maguire <alan.maguire@...cle.com>, Benjamin Tissoires <bentiss@...nel.org>,
Jiri Kosina <jikos@...nel.org>, Amery Hung <ameryhung@...il.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-input@...r.kernel.org,
sched-ext@...ts.linux.dev
Subject: Re: [PATCH bpf-next v2 05/13] resolve_btfids: Support for KF_IMPLICIT_ARGS
On Fri, Jan 16, 2026 at 12:17 PM Ihor Solodrai <ihor.solodrai@...ux.dev> wrote:
>
> Implement BTF modifications in resolve_btfids to support BPF kernel
> functions with implicit arguments.
>
> For a kfunc marked with KF_IMPLICIT_ARGS flag, a new function
> prototype is added to BTF that does not have implicit arguments. The
> kfunc's prototype is then updated to a new one in BTF. This prototype
> is the intended interface for the BPF programs.
>
> A <func_name>_impl function is added to BTF to make the original kfunc
> prototype searchable for the BPF verifier. If a <func_name>_impl
> function already exists in BTF, it's interpreted as a legacy case, and
> this step is skipped.
>
> Whether an argument is implicit is determined by its type:
> currently only `struct bpf_prog_aux *` is supported.
>
> As a result, the BTF associated with the kfunc is changed from
>
> __bpf_kfunc bpf_foo(int arg1, struct bpf_prog_aux *aux);
>
> into
>
> bpf_foo_impl(int arg1, struct bpf_prog_aux *aux);
> __bpf_kfunc bpf_foo(int arg1);
>
> For more context see previous discussions and patches [1][2].
>
> [1] https://lore.kernel.org/dwarves/ba1650aa-fafd-49a8-bea4-bdddee7c38c9@linux.dev/
> [2] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
>
> Signed-off-by: Ihor Solodrai <ihor.solodrai@...ux.dev>
> ---
> tools/bpf/resolve_btfids/main.c | 383 ++++++++++++++++++++++++++++++++
> 1 file changed, 383 insertions(+)
>
[...]
> +static int collect_decl_tags(struct btf2btf_context *ctx)
> +{
> + const u32 type_cnt = btf__type_cnt(ctx->btf);
> + struct btf *btf = ctx->btf;
> + const struct btf_type *t;
> + u32 *tags, *tmp;
> + u32 nr_tags = 0;
> +
> + tags = malloc(type_cnt * sizeof(u32));
This wastes memory, really; see below.
> + if (!tags)
> + return -ENOMEM;
> +
> + for (u32 id = 1; id < type_cnt; id++) {
> + t = btf__type_by_id(btf, id);
> + if (!btf_is_decl_tag(t))
> + continue;
> + tags[nr_tags++] = id;
> + }
> +
> + if (nr_tags == 0) {
> + ctx->decl_tags = NULL;
> + free(tags);
> + return 0;
> + }
> +
> + tmp = realloc(tags, nr_tags * sizeof(u32));
> + if (!tmp) {
> + free(tags);
> + return -ENOMEM;
> + }
This is an interesting realloc() usage pattern; in the C world it's
quite unconventional to preallocate too much memory and then shrink.
Check libbpf's libbpf_add_mem(), which is a generic "primitive" inside
libbpf. Don't reuse it as is, but it should give you an idea of a
common pattern: you start with NULL (empty data), and when you need to
add a new element, you calculate a new array size, which normally would
be some minimal value (to avoid going through 1 -> 2 -> 4 -> 8, many
small and wasteful steps; normally we just jump straight to 16 or so)
or some factor of the previous size (it doesn't have to be 2x;
libbpf_add_mem() expands by 25%, for instance).
This is a super common approach in C. Please use it here as well.
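For illustration, a minimal sketch of that grow-on-append pattern (the
struct, function names, and exact growth factor below are made up for
this example; this is not the actual libbpf_add_mem() signature):

```c
#include <stdlib.h>

/* Hypothetical dynamic array of u32s, grown on append. */
struct u32_arr {
	unsigned int *data;
	size_t cnt;
	size_t cap;
};

static int u32_arr_push(struct u32_arr *a, unsigned int v)
{
	if (a->cnt == a->cap) {
		/* jump straight to 16 to skip the wasteful 1 -> 2 -> 4 -> 8
		 * steps, then grow by ~25%, similar to libbpf_add_mem() */
		size_t new_cap = a->cap ? a->cap + a->cap / 4 + 1 : 16;
		unsigned int *tmp = realloc(a->data, new_cap * sizeof(*tmp));

		if (!tmp)
			return -1; /* caller would map this to -ENOMEM */
		a->data = tmp;
		a->cap = new_cap;
	}
	a->data[a->cnt++] = v;
	return 0;
}
```

Starting from NULL means the empty case needs no special allocation, and
the final realloc()-to-shrink step disappears entirely.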
> +
> + ctx->decl_tags = tmp;
> + ctx->nr_decl_tags = nr_tags;
> +
> + return 0;
> +}
> +
> +/*
> + * To find the kfunc flags having its struct btf_id (with ELF addresses)
> + * we need to find the address that is in range of a set8.
> + * If a set8 is found, then the flags are located at addr + 4 bytes.
> + * Return 0 (no flags!) if not found.
> + */
> +static u32 find_kfunc_flags(struct object *obj, struct btf_id *kfunc_id)
> +{
> + const u32 *elf_data_ptr = obj->efile.idlist->d_buf;
> + u64 set_lower_addr, set_upper_addr, addr;
> + struct btf_id *set_id;
> + struct rb_node *next;
> + u32 flags;
> + u64 idx;
> +
> + next = rb_first(&obj->sets);
> + while (next) {
for (next = rb_first(...); next; next = rb_next(next)) seems like a
good fit here, no?
> + set_id = rb_entry(next, struct btf_id, rb_node);
> + if (set_id->kind != BTF_ID_KIND_SET8 || set_id->addr_cnt != 1)
> + goto skip;
> +
> + set_lower_addr = set_id->addr[0];
> + set_upper_addr = set_lower_addr + set_id->cnt * sizeof(u64);
> +
> + for (u32 i = 0; i < kfunc_id->addr_cnt; i++) {
> + addr = kfunc_id->addr[i];
> + /*
> + * Lower bound is exclusive to skip the 8-byte header of the set.
> + * Upper bound is inclusive to capture the last entry at offset 8*cnt.
> + */
> + if (set_lower_addr < addr && addr <= set_upper_addr) {
> + pr_debug("found kfunc %s in BTF_ID_FLAGS %s\n",
> + kfunc_id->name, set_id->name);
> + goto found;
Why goto? Just do what needs to be done in place and return.
> + }
> + }
> +skip:
> + next = rb_next(next);
> + }
> +
> + return 0;
> +
> +found:
> + idx = addr - obj->efile.idlist_addr;
> + idx = idx / sizeof(u32) + 1;
> + flags = elf_data_ptr[idx];
> +
> + return flags;
> +}
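With both of the suggestions above applied, the control flow could look
roughly like this simplified, self-contained sketch (the rb-tree walk is
replaced with a plain array, and all the types and names are invented
here just to keep the example compilable standalone):

```c
#include <stddef.h>

/* Stand-in for a BTF_ID set8 entry; not the real struct btf_id. */
struct set8_stub {
	unsigned long long base_addr; /* address of the set8 header */
	unsigned int cnt;             /* number of 8-byte entries */
	unsigned int flags_stub;      /* stand-in for flags read from ELF data */
};

static unsigned int find_flags(const struct set8_stub *sets, size_t nr_sets,
			       const unsigned long long *addrs, size_t nr_addrs)
{
	for (size_t s = 0; s < nr_sets; s++) {
		unsigned long long lo = sets[s].base_addr;
		unsigned long long hi = lo + sets[s].cnt * 8ULL;

		for (size_t i = 0; i < nr_addrs; i++) {
			/* lower bound exclusive (skips the 8-byte set header),
			 * upper bound inclusive (last entry at offset 8*cnt) */
			if (lo < addrs[i] && addrs[i] <= hi)
				return sets[s].flags_stub; /* direct return, no goto */
		}
	}
	return 0; /* no flags */
}
```

Returning directly from the match drops both the `found:` label and the
`skip:` label, since the outer for loop's increment replaces the latter.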
> +
> +static s64 collect_kfuncs(struct object *obj, struct btf2btf_context *ctx)
> +{
> + struct kfunc *kfunc, *kfuncs, *tmp;
> + const char *tag_name, *func_name;
> + struct btf *btf = ctx->btf;
> + const struct btf_type *t;
> + u32 flags, func_id;
> + struct btf_id *id;
> + s64 nr_kfuncs = 0;
> +
> + if (ctx->nr_decl_tags == 0)
> + return 0;
> +
> + kfuncs = malloc(ctx->nr_decl_tags * sizeof(*kfuncs));
ditto about realloc() usage pattern
> + if (!kfuncs)
> + return -ENOMEM;
> +
[...]
> +/*
> + * For a kfunc with KF_IMPLICIT_ARGS we do the following:
> + * 1. Add a new function with _impl suffix in the name, with the prototype
> + * of the original kfunc.
> + * 2. Add all decl tags except "bpf_kfunc" for the _impl func.
> + * 3. Add a new function prototype with modified list of arguments:
> + * omitting implicit args.
> + * 4. Change the prototype of the original kfunc to the new one.
> + *
> + * This way we transform the BTF associated with the kfunc from
> + * __bpf_kfunc bpf_foo(int arg1, void *implicit_arg);
> + * into
> + * bpf_foo_impl(int arg1, void *implicit_arg);
> + * __bpf_kfunc bpf_foo(int arg1);
> + *
> + * If a kfunc with KF_IMPLICIT_ARGS already has an _impl counterpart
> + * in BTF, then it's a legacy case: an _impl function is declared in the
> + * source code. In this case, we can skip adding an _impl function, but we
> + * still have to add a func prototype that omits implicit args.
> + */
> +static int process_kfunc_with_implicit_args(struct btf2btf_context *ctx, struct kfunc *kfunc)
> +{
this logic looks good
> + s32 idx, new_proto_id, new_func_id, proto_id;
> + const char *param_name, *tag_name;
> + const struct btf_param *params;
> + enum btf_func_linkage linkage;
> + char tmp_name[KSYM_NAME_LEN];
> + struct btf *btf = ctx->btf;
> + int err, len, nr_params;
> + struct btf_type *t;
> +
[...]