Message-ID: <20250731092433.49367-1-dongml2@chinatelecom.cn>
Date: Thu, 31 Jul 2025 17:24:23 +0800
From: Menglong Dong <menglong8.dong@...il.com>
To: mhiramat@...nel.org,
olsajiri@...il.com
Cc: rostedt@...dmis.org,
mathieu.desnoyers@...icios.com,
hca@...ux.ibm.com,
revest@...omium.org,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org,
bpf@...r.kernel.org
Subject: [PATCH bpf-next v3 0/4] fprobe: use rhashtable for fprobe_ip_table

Currently, the size of the hash table used for fprobe_ip_table is
fixed at 256 buckets, which can cause huge overhead when a large
number of kernel functions are hooked.

In this series, we use rhltable for fprobe_ip_table to reduce the
overhead.
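
For reference, here is a minimal sketch (not the exact patch) of the
rhltable usage. rhltable is the rhashtable variant that allows
duplicate keys, which we need because several fprobes can hook the
same function address; the struct layout follows the existing fprobe
code, the rest is illustrative:

  #include <linux/rhashtable.h>

  struct fprobe_hlist_node {
          struct rhlist_head hlist;     /* replaces the old hlist_node */
          unsigned long addr;           /* hooked function address, the key */
          struct fprobe *fp;
  };

  static const struct rhashtable_params fprobe_rht_params = {
          .head_offset            = offsetof(struct fprobe_hlist_node, hlist),
          .key_offset             = offsetof(struct fprobe_hlist_node, addr),
          .key_len                = sizeof(unsigned long),
          .automatic_shrinking    = true,
  };

  static struct rhltable fprobe_ip_table;

  /* the table grows and shrinks automatically instead of staying at a
   * fixed number of buckets */
  int fprobe_table_init(void)
  {
          return rhltable_init(&fprobe_ip_table, &fprobe_rht_params);
  }

  /* rhltable_insert() can fail (e.g. -ENOMEM), so the error must be
   * handled; see the changes since V2 below */
  int insert_fprobe_node(struct fprobe_hlist_node *node)
  {
          return rhltable_insert(&fprobe_ip_table, &node->hlist,
                                 fprobe_rht_params);
  }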

Meanwhile, we also add the benchmark testcase "kprobe-multi-all",
which hooks all the kernel functions during the testing. Before this
series, the performance is:

usermode-count : 889.269 ± 0.053M/s
kernel-count : 437.149 ± 0.501M/s
syscall-count : 31.618 ± 0.725M/s
fentry : 135.591 ± 0.129M/s
fexit : 68.127 ± 0.062M/s
fmodret : 71.764 ± 0.098M/s
rawtp : 198.375 ± 0.190M/s
tp : 79.770 ± 0.064M/s
kprobe : 54.590 ± 0.021M/s
kprobe-multi : 57.940 ± 0.044M/s
kprobe-multi-all: 12.151 ± 0.020M/s
kretprobe : 21.945 ± 0.163M/s
kretprobe-multi: 28.199 ± 0.018M/s
kretprobe-multi-all: 9.667 ± 0.008M/s

With this series, the performance is:

usermode-count : 888.863 ± 0.378M/s
kernel-count : 429.339 ± 0.136M/s
syscall-count : 31.215 ± 0.019M/s
fentry : 135.604 ± 0.118M/s
fexit : 68.470 ± 0.074M/s
fmodret : 70.957 ± 0.016M/s
rawtp : 202.650 ± 0.304M/s
tp : 80.428 ± 0.053M/s
kprobe : 55.915 ± 0.074M/s
kprobe-multi : 54.015 ± 0.039M/s
kprobe-multi-all: 46.381 ± 0.024M/s
kretprobe : 22.234 ± 0.050M/s
kretprobe-multi: 27.946 ± 0.016M/s
kretprobe-multi-all: 24.439 ± 0.016M/s

The benchmark result of "kprobe-multi-all" increases from 12.151M/s
to 46.381M/s.

I don't know why, but the baseline (without this series) for
"kprobe-multi-all" is much better in this version. In V2, the
benchmark increased from 6.283M/s to 54.487M/s, but it becomes
12.151M/s to 46.381M/s in this version. Maybe it has some relation to
compiler optimization :/

The result of this version should be more accurate, and it is similar
to Jiri's result: from 3.565 ± 0.047M/s to 11.553 ± 0.458M/s.

The locking is not handled properly in the first patch. In
fprobe_entry(), we should hold the RCU read lock when we access the
rhlist_head. However, we can't keep it held across __fprobe_handler(),
as it can sleep. In the original logic, it seems that the usage of
hlist_for_each_entry_from_rcu() is not protected by rcu_read_lock()
either, is it? I don't know how to handle this part ;(
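
To illustrate the problem, the lookup in fprobe_entry() would look
roughly like this (a sketch under the same assumptions as above, not
the exact patch):

  /* the rhlist walk must stay inside the RCU read-side critical
   * section, but a sleepable handler must not run there */
  static void fprobe_entry_sketch(unsigned long ip)
  {
          struct fprobe_hlist_node *node;
          struct rhlist_head *head, *pos;

          rcu_read_lock();
          head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);
          rhl_for_each_entry_rcu(node, pos, head, hlist) {
                  /* node->fp's handler would be invoked here; if it can
                   * sleep, it can't be called under rcu_read_lock() */
          }
          rcu_read_unlock();
  }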

Changes since V2:
* some format optimization, and handle the error returned from
  rhltable_insert() in insert_fprobe_node() in the 1st patch
* add the "kretprobe-multi-all" testcase to the 4th patch
* attach an empty kprobe-multi prog (see the sketch below the change
  log) to the kernel functions that don't call incr_count(), to make
  the result more accurate, in the 4th patch

Changes since V1:
* use rhltable instead of rhashtable to handle duplicate keys
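
The "empty" kprobe-multi prog mentioned above can be as simple as the
following sketch (the section name is the one libbpf uses for
kprobe-multi progs; the function name is illustrative, not necessarily
what trigger_bench.c uses):

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  /* does no work, so the hooked functions only pay the attach overhead
   * and don't touch the benchmark counter */
  SEC("kprobe.multi")
  int bench_kprobe_multi_empty(void *ctx)
  {
          return 0;
  }
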
Menglong Dong (4):
fprobe: use rhltable for fprobe_ip_table
selftests/bpf: move get_ksyms and get_addrs to trace_helpers.c
selftests/bpf: skip recursive functions for kprobe_multi
selftests/bpf: add benchmark testing for kprobe-multi-all
include/linux/fprobe.h | 2 +-
kernel/trace/fprobe.c | 155 +++++++-----
tools/testing/selftests/bpf/bench.c | 4 +
.../selftests/bpf/benchs/bench_trigger.c | 54 ++++
.../selftests/bpf/benchs/run_bench_trigger.sh | 4 +-
.../bpf/prog_tests/kprobe_multi_test.c | 220 +----------------
.../selftests/bpf/progs/trigger_bench.c | 12 +
tools/testing/selftests/bpf/trace_helpers.c | 233 ++++++++++++++++++
tools/testing/selftests/bpf/trace_helpers.h | 3 +
9 files changed, 401 insertions(+), 286 deletions(-)
--
2.50.1