Message-Id: <20260115144946.439069-1-realwujing@gmail.com>
Date: Thu, 15 Jan 2026 22:49:46 +0800
From: wujing <realwujing@...il.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>
Cc: Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>,
Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>,
Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>,
bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
wujing <realwujing@...il.com>,
Qiliang Yuan <yuanql9@...natelecom.cn>
Subject: [PATCH] bpf/verifier: optimize ID mapping reset in states_equal
The verifier uses an ID mapping table (struct bpf_idmap) during state
equivalence checks. Currently, reset_idmap_scratch() performs a full
memset of the entire map on every call.
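For reference, the pairing scheme the table implements can be sketched
as follows (a simplified, self-contained illustration with hypothetical
names; the zero-id fast path of check_ids() is omitted):

  #include <stdbool.h>

  struct id_pair { unsigned int old, cur; };

  /* Return true if old_id consistently maps to cur_id, recording a new
   * pair in the first empty slot. Mirrors the shape of the check_ids()
   * loop; types and names are simplified.
   */
  static bool ids_match(unsigned int old_id, unsigned int cur_id,
                        struct id_pair *map, unsigned int size)
  {
          unsigned int i;

          for (i = 0; i < size; i++) {
                  if (!map[i].old) {
                          /* empty slot: first time old_id is seen */
                          map[i].old = old_id;
                          map[i].cur = cur_id;
                          return true;
                  }
                  if (map[i].old == old_id)
                          return map[i].cur == cur_id;
                  if (map[i].cur == cur_id)
                          return false;
          }
          return false; /* table full */
  }

Because pairs are recorded sequentially from slot 0, the index of the
last written slot is a natural high-water mark for a partial reset.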
The table size is exactly 4800 bytes (approx. 4.7 KiB), calculated as:
- MAX_BPF_REG = 11
- MAX_BPF_STACK = 512
- BPF_REG_SIZE = 8
- MAX_CALL_FRAMES = 8
- BPF_ID_MAP_SIZE = (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) *
  MAX_CALL_FRAMES = (11 + 512 / 8) * 8 = 600 entries
- Each entry (struct bpf_id_pair) is 8 bytes (two u32 fields)
- Total size = 600 * 8 = 4800 bytes
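The arithmetic can be double-checked with a standalone snippet
(constants copied from the values quoted above; uint32_t stands in for
the kernel's u32):

  #include <stdint.h>

  #define MAX_BPF_REG     11
  #define MAX_BPF_STACK   512
  #define BPF_REG_SIZE    8
  #define MAX_CALL_FRAMES 8
  #define BPF_ID_MAP_SIZE \
          ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)

  struct bpf_id_pair { uint32_t old; uint32_t cur; }; /* 2 * 4 = 8 bytes */

  _Static_assert(BPF_ID_MAP_SIZE == 600, "(11 + 64) * 8 entries");
  _Static_assert(sizeof(struct bpf_id_pair) * BPF_ID_MAP_SIZE == 4800,
                 "600 * 8 bytes");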
For complex programs with many pruning points, this unconditional
full-size memset introduces significant CPU overhead and cache
pressure, even though typically only a few ID slots are actually used.
This patch optimizes the reset logic by:
1. Adding 'map_cnt' to struct bpf_idmap to track the number of used slots.
2. Updating 'map_cnt' in check_ids() to record the high-water mark.
3. Making reset_idmap_scratch() memset only the first 'map_cnt' entries.
This improves pruning performance and reduces redundant memory writes.
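Stripped of verifier details, the resulting reset pattern looks roughly
like this (illustrative userspace sketch; the actual change is in the
diff below):

  #include <string.h>

  struct scratch {
          unsigned int cnt; /* slots written since the last reset */
          struct { unsigned int old, cur; } map[600];
  };

  /* Slots are filled sequentially, so i + 1 is the number of used
   * entries (the high-water mark).
   */
  static void scratch_record(struct scratch *s, unsigned int i,
                             unsigned int old_id, unsigned int cur_id)
  {
          s->map[i].old = old_id;
          s->map[i].cur = cur_id;
          s->cnt = i + 1;
  }

  /* Clear only the entries that were actually touched rather than the
   * whole 4800-byte array.
   */
  static void scratch_reset(struct scratch *s)
  {
          if (s->cnt) {
                  memset(s->map, 0, s->cnt * sizeof(s->map[0]));
                  s->cnt = 0;
          }
  }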
Signed-off-by: wujing <realwujing@...il.com>
Signed-off-by: Qiliang Yuan <yuanql9@...natelecom.cn>
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 10 ++++++++--
 2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 130bcbd66f60..562f7e63be29 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -692,6 +692,7 @@ struct bpf_id_pair {
 
 struct bpf_idmap {
 	u32 tmp_id_gen;
+	u32 map_cnt;
 	struct bpf_id_pair map[BPF_ID_MAP_SIZE];
 };
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 37ce3990c9ad..6220dde41107 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18954,6 +18954,7 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 			/* Reached an empty slot; haven't seen this id before */
 			map[i].old = old_id;
 			map[i].cur = cur_id;
+			idmap->map_cnt = i + 1;
 			return true;
 		}
 		if (map[i].old == old_id)
@@ -19471,8 +19472,13 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 
 static void reset_idmap_scratch(struct bpf_verifier_env *env)
 {
-	env->idmap_scratch.tmp_id_gen = env->id_gen;
-	memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
+	struct bpf_idmap *idmap = &env->idmap_scratch;
+
+	idmap->tmp_id_gen = env->id_gen;
+	if (idmap->map_cnt) {
+		memset(idmap->map, 0, idmap->map_cnt * sizeof(struct bpf_id_pair));
+		idmap->map_cnt = 0;
+	}
 }
 
 static bool states_equal(struct bpf_verifier_env *env,
--
2.39.5