Message-Id: <20260115150405.443581-1-realwujing@gmail.com>
Date: Thu, 15 Jan 2026 23:04:05 +0800
From: wujing <realwujing@...il.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>
Cc: Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>,
Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>,
Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>,
bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
wujing <realwujing@...il.com>,
Qiliang Yuan <yuanql9@...natelecom.cn>
Subject: [PATCH] bpf/verifier: optimize precision backtracking by skipping precise bits

Backtracking is one of the most expensive parts of the verifier. When
marking precision, the verifier currently always runs the full
__mark_chain_precision() walk even if the target register or stack slot
is already marked precise.

Since a precise mark in a state implies that all necessary ancestor
states have already been backtracked and marked accordingly, the walk
can be safely skipped when the bit is already set.

Implement an early exit in:

1. mark_chain_precision(): return immediately if the target register is
   already precise.
2. propagate_precision(): when propagating from an old state, skip
   registers and stack slots that are already precise in the current
   state.

This reduces redundant backtracking in complex BPF programs with
frequent state pruning and precision propagation.

Signed-off-by: wujing <realwujing@...il.com>
Signed-off-by: Qiliang Yuan <yuanql9@...natelecom.cn>
---
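A note for reviewers, not part of the commit message: the snippet below
is a minimal userspace sketch of the idea, not kernel code, and the
toy_* names are made up for illustration. It mirrors the structure of
the change: the expensive parent-chain walk is skipped whenever the
target's precise bit is already set, both on the direct
mark_chain_precision() path and on the propagate_precision() path.

#include <stdbool.h>
#include <stdio.h>

#define TOY_NREGS 11

/* Stand-in for a verifier state: parent link + per-register precise bits. */
struct toy_state {
	struct toy_state *parent;
	bool precise[TOY_NREGS];
};

static int chain_walks;	/* how many times the expensive walk ran */

/* Stand-in for __mark_chain_precision(): walk up the parent chain. */
static void toy_mark_chain_precise(struct toy_state *st, int regno)
{
	chain_walks++;
	for (; st; st = st->parent) {
		if (st->precise[regno])	/* ancestors already handled */
			return;
		st->precise[regno] = true;
	}
}

/* Early exit as in the first hunk: nothing to do if already precise. */
static void toy_mark_precise(struct toy_state *cur, int regno)
{
	if (cur->precise[regno])
		return;
	toy_mark_chain_precise(cur, regno);
}

/* Skip as in the propagate_precision() hunks. */
static void toy_propagate(const struct toy_state *old, struct toy_state *cur)
{
	int i;

	for (i = 0; i < TOY_NREGS; i++) {
		if (!old->precise[i] || cur->precise[i])
			continue;
		toy_mark_chain_precise(cur, i);
	}
}

int main(void)
{
	struct toy_state parent = { .parent = NULL };
	struct toy_state cur = { .parent = &parent };
	struct toy_state old = { .precise = { [1] = true, [3] = true } };

	toy_mark_precise(&cur, 3);	/* runs the walk once */
	toy_mark_precise(&cur, 3);	/* early exit, no second walk */
	toy_propagate(&old, &cur);	/* r3 skipped, only r1 walks */
	printf("chain walks: %d\n", chain_walks);	/* 2 instead of 4 */
	return 0;
}

Built with a plain cc invocation this prints "chain walks: 2", whereas
dropping the two precise checks makes it 4, which is the redundancy the
patch avoids.
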
kernel/bpf/verifier.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6220dde41107..378341e1177f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4927,6 +4927,14 @@ static int __mark_chain_precision(struct bpf_verifier_env *env,
 
 int mark_chain_precision(struct bpf_verifier_env *env, int regno)
 {
+	struct bpf_reg_state *reg;
+
+	if (regno >= 0) {
+		reg = &env->cur_state->frame[env->cur_state->curframe]->regs[regno];
+		if (reg->precise)
+			return 0;
+	}
+
 	return __mark_chain_precision(env, env->cur_state, regno, NULL);
 }
 
@@ -19527,19 +19535,23 @@ static int propagate_precision(struct bpf_verifier_env *env,
			       struct bpf_verifier_state *cur,
			       bool *changed)
 {
-	struct bpf_reg_state *state_reg;
-	struct bpf_func_state *state;
+	struct bpf_reg_state *state_reg, *cur_reg;
+	struct bpf_func_state *state, *cur_state;
 	int i, err = 0, fr;
 	bool first;
 
 	for (fr = old->curframe; fr >= 0; fr--) {
 		state = old->frame[fr];
+		cur_state = cur->frame[fr];
 		state_reg = state->regs;
 		first = true;
 		for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
 			if (state_reg->type != SCALAR_VALUE ||
 			    !state_reg->precise)
 				continue;
+			cur_reg = &cur_state->regs[i];
+			if (cur_reg->precise)
+				continue;
 			if (env->log.level & BPF_LOG_LEVEL2) {
 				if (first)
 					verbose(env, "frame %d: propagating r%d", fr, i);
@@ -19557,6 +19569,9 @@ static int propagate_precision(struct bpf_verifier_env *env,
 			if (state_reg->type != SCALAR_VALUE ||
 			    !state_reg->precise)
 				continue;
+			cur_reg = &cur_state->stack[i].spilled_ptr;
+			if (cur_reg->precise)
+				continue;
 			if (env->log.level & BPF_LOG_LEVEL2) {
 				if (first)
 					verbose(env, "frame %d: propagating fp%d",
--
2.39.5