Message-Id: <20260107-skb-meta-safeproof-netdevs-rx-only-v3-15-0d461c5e4764@cloudflare.com>
Date: Wed, 07 Jan 2026 15:28:15 +0100
From: Jakub Sitnicki <jakub@...udflare.com>
To: bpf@...r.kernel.org
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Stanislav Fomichev <sdf@...ichev.me>, Simon Horman <horms@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
kernel-team@...udflare.com
Subject: [PATCH bpf-next v3 15/17] bpf, verifier: Support direct kernel
calls in gen_prologue

Prepare the ground for the next patch, which will emit a call to a
regular kernel function (not a kfunc or a BPF helper) from the prologue
generator using BPF_EMIT_CALL.
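
For reference, BPF_EMIT_CALL (include/linux/filter.h) builds a BPF_CALL
instruction whose imm field already carries the target's offset from
__bpf_call_base, roughly:

	#define BPF_CALL_IMM(x)	((void *)(x) - (void *)__bpf_call_base)

	#define BPF_EMIT_CALL(FUNC)				\
		((struct bpf_insn) {				\
			.code  = BPF_JMP | BPF_CALL,		\
			.dst_reg = 0,				\
			.src_reg = 0,				\
			.off   = 0,				\
			.imm   = BPF_CALL_IMM(FUNC) })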

These calls use offsets relative to __bpf_call_base and must bypass the
verifier's patch_call_imm fixup, which expects BPF helper IDs rather than
pre-resolved offsets.
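
As an illustration (a sketch, not part of this patch), compare the two
ways such a call can be emitted:

	/* imm holds a helper ID; patch_call_imm resolves it later */
	*insn++ = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
			       BPF_FUNC_skb_pull_data);

	/* imm is already bpf_skb_pull_data - __bpf_call_base and
	 * must not be rewritten again
	 */
	*insn++ = BPF_EMIT_CALL(bpf_skb_pull_data);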

Add a finalized_call flag to bpf_insn_aux_data to mark call instructions
whose imm field already holds the final offset, so that the verifier
skips the patch_call_imm fixup for them.
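
The marking loop below identifies such instructions with
bpf_helper_call(), which (quoting verifier.c for reference, roughly)
matches any plain BPF_CALL whose src_reg is 0, i.e. neither a kfunc
call nor a BPF-to-BPF subprogram call:

	static bool bpf_helper_call(const struct bpf_insn *insn)
	{
		return insn->code == (BPF_JMP | BPF_CALL) &&
		       insn->src_reg == 0;
	}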

As a follow-up, existing gen_prologue and gen_epilogue callbacks that
use kfuncs can be converted to BPF_EMIT_CALL, removing the need for
kfunc resolution during prologue/epilogue generation.

Suggested-by: Alexei Starovoitov <ast@...nel.org>
Signed-off-by: Jakub Sitnicki <jakub@...udflare.com>
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 12 ++++++++++++
 net/core/filter.c            |  5 +++--
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b32ddf0f0ab3..9ccd56c04a45 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -561,6 +561,7 @@ struct bpf_insn_aux_data {
 	bool non_sleepable; /* helper/kfunc may be called from non-sleepable context */
 	bool is_iter_next; /* bpf_iter_<type>_next() kfunc call */
 	bool call_with_percpu_alloc_ptr; /* {this,per}_cpu_ptr() with prog percpu alloc */
+	bool finalized_call; /* call holds function offset relative to __bpf_call_base */
 	u8 alu_state; /* used in combination with alu_limit */
 	/* true if STX or LDX instruction is a part of a spill/fill
 	 * pattern for a bpf_fastcall call.
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 76f2befc8159..219e233cc4c6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21816,6 +21816,14 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			env->prog = new_prog;
 			delta += cnt - 1;
 
+			/* gen_prologue emits function calls with target address
+			 * relative to __bpf_call_base. Skip patch_call_imm fixup.
+			 */
+			for (i = 0; i < cnt - 1; i++) {
+				if (bpf_helper_call(&env->prog->insnsi[i]))
+					env->insn_aux_data[i].finalized_call = true;
+			}
+
 			ret = add_kfunc_in_insns(env, insn_buf, cnt - 1);
 			if (ret < 0)
 				return ret;
@@ -23422,6 +23430,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			goto next_insn;
 		}
 patch_call_imm:
+		if (env->insn_aux_data[i + delta].finalized_call)
+			goto next_insn;
+
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
 		/* all functions that have prototype and verifier allowed
 		 * programs to call them, must be real in-kernel functions
@@ -23433,6 +23444,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			return -EFAULT;
 		}
 		insn->imm = fn->func - __bpf_call_base;
+		env->insn_aux_data[i + delta].finalized_call = true;
 next_insn:
 		if (subprogs[cur_subprog + 1].start == i + delta + 1) {
 			subprogs[cur_subprog].stack_depth += stack_depth_extra;
diff --git a/net/core/filter.c b/net/core/filter.c
index 07af2a94cc9a..e91d5a39e0a7 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -9080,10 +9080,11 @@ static int bpf_unclone_prologue(struct bpf_insn *insn_buf, u32 pkt_access_flags,
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 7);
 
 	/* ret = bpf_skb_pull_data(skb, 0); */
+	BUILD_BUG_ON(!__same_type(btf_bpf_skb_pull_data,
+				  (u64 (*)(struct sk_buff *, u32))NULL));
 	*insn++ = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
 	*insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_2, BPF_REG_2);
-	*insn++ = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
-			       BPF_FUNC_skb_pull_data);
+	*insn++ = BPF_EMIT_CALL(bpf_skb_pull_data);
 	/* if (!ret)
 	 *	goto restore;
 	 * return TC_ACT_SHOT;
--
2.43.0