Message-ID: <20250320114200.14377-1-jolsa@kernel.org>
Date: Thu, 20 Mar 2025 12:41:35 +0100
From: Jiri Olsa <jolsa@...nel.org>
To: Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrii Nakryiko <andrii@...nel.org>
Cc: Eyal Birger <eyal.birger@...il.com>,
kees@...nel.org,
bpf@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org,
x86@...nel.org,
Song Liu <songliubraving@...com>,
Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
Hao Luo <haoluo@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Alan Maguire <alan.maguire@...cle.com>,
David Laight <David.Laight@...LAB.COM>,
Thomas Weißschuh <thomas@...ch.de>
Subject: [PATCH RFCv3 00/23] uprobes: Add support to optimize usdt probes on x86_64
hi,
this patchset adds support to optimize usdt probes that sit on top of
a 5-byte nop instruction.
The generic approach (optimizing all uprobes) is hard, because it
requires emulating arbitrary original instructions, with all the
related issues. The usdt case, which stores a 5-byte nop, seems much
easier, so starting with that.
The basic idea is to replace the breakpoint exception with a syscall,
which is faster on x86_64. For more details please see the changelog
of patch 8.
The run_bench_uprobes.sh benchmark triggers a uprobe (on top of
different original instructions) in a loop and counts how many
iterations happen per second (the unit below is millions of loops
per second).
There's a big speed-up if you compare the current usdt implementation
(uprobe-nop) with the proposed one (uprobe-nop5):
current:
usermode-count : 152.604 ± 0.044M/s
syscall-count : 13.359 ± 0.042M/s
--> uprobe-nop : 3.229 ± 0.002M/s
uprobe-push : 3.086 ± 0.004M/s
uprobe-ret : 1.114 ± 0.004M/s
uprobe-nop5 : 1.121 ± 0.005M/s
uretprobe-nop : 2.145 ± 0.002M/s
uretprobe-push : 2.070 ± 0.001M/s
uretprobe-ret : 0.931 ± 0.001M/s
uretprobe-nop5 : 0.957 ± 0.001M/s
after the change:
usermode-count : 152.448 ± 0.244M/s
syscall-count : 14.321 ± 0.059M/s
uprobe-nop : 3.148 ± 0.007M/s
uprobe-push : 2.976 ± 0.004M/s
uprobe-ret : 1.068 ± 0.003M/s
--> uprobe-nop5 : 7.038 ± 0.007M/s
uretprobe-nop : 2.109 ± 0.004M/s
uretprobe-push : 2.035 ± 0.001M/s
uretprobe-ret : 0.908 ± 0.001M/s
uretprobe-nop5 : 3.377 ± 0.009M/s
I see a bit more speed-up on Intel (above) compared to AMD. The big
nop5 speed-up is partly due to emulating nop5 and partly due to the
optimization.
The key speed-up we do this for is the USDT switch from nop to nop5:
uprobe-nop : 3.148 ± 0.007M/s
uprobe-nop5 : 7.038 ± 0.007M/s
rfc v3 changes:
- I tried to have just a single syscall for both entry and return
  uprobes, but it turned out to be slower than having two separate
  syscalls, probably due to the extra save/restore processing we have
  to do for the argument register. I see differences like:
2 syscalls: uprobe-nop5 : 7.038 ± 0.007M/s
1 syscall: uprobe-nop5 : 6.943 ± 0.003M/s
- use the instructions (nop5/int3/call) to determine the state of the
  uprobe update in the process
- removed the endbr instruction from the uprobe trampoline
- seccomp changes
pending todo (or follow-ups):
- shadow stack fails for uprobe session setup; will fix it in the
  next version
- use PROCMAP_QUERY in tests
- alloc 'struct uprobes_state' for mm_struct only when needed [Andrii]
thanks,
jirka
Cc: Eyal Birger <eyal.birger@...il.com>
Cc: kees@...nel.org
---
Jiri Olsa (23):
uprobes: Rename arch_uretprobe_trampoline function
uprobes: Make copy_from_page global
uprobes: Move ref_ctr_offset update out of uprobe_write_opcode
uprobes: Add uprobe_write function
uprobes: Add nbytes argument to uprobe_write_opcode
uprobes: Add orig argument to uprobe_write and uprobe_write_opcode
uprobes: Remove breakpoint in unapply_uprobe under mmap_write_lock
uprobes/x86: Add uprobe syscall to speed up uprobe
uprobes/x86: Add mapping for optimized uprobe trampolines
uprobes/x86: Add support to emulate nop5 instruction
uprobes/x86: Add support to optimize uprobes
selftests/bpf: Use 5-byte nop for x86 usdt probes
selftests/bpf: Reorg the uprobe_syscall test function
selftests/bpf: Rename uprobe_syscall_executed prog to test_uretprobe_multi
selftests/bpf: Add uprobe/usdt syscall tests
selftests/bpf: Add hit/attach/detach race optimized uprobe test
selftests/bpf: Add uprobe syscall sigill signal test
selftests/bpf: Add optimized usdt variant for basic usdt test
selftests/bpf: Add uprobe_regs_equal test
selftests/bpf: Change test_uretprobe_regs_change for uprobe and uretprobe
selftests/bpf: Add 5-byte nop uprobe trigger bench
seccomp: passthrough uprobe systemcall without filtering
selftests/seccomp: validate uprobe syscall passes through seccomp
arch/arm/probes/uprobes/core.c | 2 +-
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/asm/uprobes.h | 7 ++
arch/x86/kernel/uprobes.c | 540 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
include/linux/syscalls.h | 2 +
include/linux/uprobes.h | 19 +++-
kernel/events/uprobes.c | 141 +++++++++++++++++-------
kernel/fork.c | 1 +
kernel/seccomp.c | 32 ++++--
kernel/sys_ni.c | 1 +
tools/testing/selftests/bpf/bench.c | 12 +++
tools/testing/selftests/bpf/benchs/bench_trigger.c | 42 ++++++++
tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh | 2 +-
tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c | 453 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------
tools/testing/selftests/bpf/prog_tests/usdt.c | 38 ++++---
tools/testing/selftests/bpf/progs/uprobe_syscall.c | 4 +-
tools/testing/selftests/bpf/progs/uprobe_syscall_executed.c | 41 ++++++-
tools/testing/selftests/bpf/sdt.h | 9 +-
tools/testing/selftests/bpf/test_kmods/bpf_testmod.c | 11 +-
tools/testing/selftests/seccomp/seccomp_bpf.c | 107 ++++++++++++++----
20 files changed, 1338 insertions(+), 127 deletions(-)