Message-ID: <20250114140237.3506624-1-jolsa@kernel.org>
Date: Tue, 14 Jan 2025 15:02:37 +0100
From: Jiri Olsa <jolsa@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
David Laight <David.Laight@...LAB.COM>,
Peter Zijlstra <peterz@...radead.org>
Cc: lkml <linux-kernel@...r.kernel.org>,
linux-trace-kernel@...r.kernel.org,
bpf@...r.kernel.org,
x86@...nel.org
Subject: [RFC] x86/alternatives: Merge first and second step in text_poke_bp_batch

hi,
while looking at similar code for uprobes I was wondering if we
could merge the first two steps of the instruction update in the
text_poke_bp_batch function.
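
For reference, the current sequence looks roughly like this (a heavily
simplified sketch, not the literal code; the len == 6 prefix handling,
the do_sync accounting and the perf event side are left out):

  u8 int3 = INT3_INSN_OPCODE;

  /* first step: install int3 at the first byte of every site */
  for (i = 0; i < nr_entries; i++) {
          tp[i].old = *(u8 *)text_poke_addr(&tp[i]);
          text_poke(text_poke_addr(&tp[i]), &int3, INT3_INSN_SIZE);
  }
  text_poke_sync();

  /* second step: write all but the first byte of the new instruction */
  for (i = 0; i < nr_entries; i++) {
          if (tp[i].len > INT3_INSN_SIZE)
                  text_poke(text_poke_addr(&tp[i]) + INT3_INSN_SIZE,
                            tp[i].text + INT3_INSN_SIZE,
                            tp[i].len - INT3_INSN_SIZE);
  }
  text_poke_sync();

  /* third step: replace int3 with the first byte of the new instruction */
  for (i = 0; i < nr_entries; i++)
          text_poke(text_poke_addr(&tp[i]), &tp[i].text[0], INT3_INSN_SIZE);
  text_poke_sync();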

The first step would then be to write the int3 byte together with the
rest of the bytes of the new instruction, instead of doing that in two
separate steps. The second step would then overwrite the int3 byte
with the first byte of the new instruction.
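
In the same simplified terms the merged sequence would look roughly
like this (this is essentially what the attached patch does; the patch
additionally keeps the len == 6 handling and the copy of the old bytes,
and tp[i].first is the new field it introduces):

  /* first step: write int3 plus the remaining bytes in one go */
  for (i = 0; i < nr_entries; i++) {
          tp[i].first = tp[i].text[0];
          tp[i].text[0] = INT3_INSN_OPCODE;
          text_poke(text_poke_addr(&tp[i]), tp[i].text, tp[i].len);
  }
  text_poke_sync();

  /* second step: overwrite int3 with the saved first byte */
  for (i = 0; i < nr_entries; i++)
          text_poke(text_poke_addr(&tp[i]), &tp[i].first, INT3_INSN_SIZE);
  text_poke_sync();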

Would that work, or am I missing some x86 detail that could lead to a
crash?

I tried to hack it together in the attached patch and it speeds up
text_poke_bp_batch a bit, as shown below.

thoughts? thanks,
jirka
---
# cat ~/test.sh
for i in `seq 1 50`; do echo function > current_tracer ; echo nop > current_tracer ; done

current code:

# perf stat -e cycles:k,instructions:k -a -- ~jolsa/test.sh

 Performance counter stats for 'system wide':

   158,872,470,494      cycles:k
    61,529,351,096      instructions:k      #    0.39  insn per cycle

      12.847083584 seconds time elapsed

with the change below:

# perf stat -e cycles:k,instructions:k -a -- ~jolsa/test.sh

 Performance counter stats for 'system wide':

   105,687,644,963      cycles:k
    45,981,957,996      instructions:k      #    0.44  insn per cycle

      10.011825294 seconds time elapsed

---
 arch/x86/kernel/alternative.c | 30 +++++++++---------------------
 1 file changed, 9 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 243843e44e89..99830e9cd641 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2125,9 +2125,9 @@ struct text_poke_loc {
         s32 disp;
         u8 len;
         u8 opcode;
-        const u8 text[POKE_MAX_OPCODE_SIZE];
+        u8 text[POKE_MAX_OPCODE_SIZE];
         /* see text_poke_bp_batch() */
-        u8 old;
+        u8 first;
 };

 struct bp_patching_desc {
@@ -2282,7 +2282,6 @@ static int tp_vec_nr;
  */
 static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries)
 {
-        unsigned char int3 = INT3_INSN_OPCODE;
         unsigned int i;
         int do_sync;

@@ -2313,29 +2312,20 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
          */
         smp_wmb();

-        /*
-         * First step: add a int3 trap to the address that will be patched.
-         */
-        for (i = 0; i < nr_entries; i++) {
-                tp[i].old = *(u8 *)text_poke_addr(&tp[i]);
-                text_poke(text_poke_addr(&tp[i]), &int3, INT3_INSN_SIZE);
-        }
-
-        text_poke_sync();
-
         /*
          * Second step: update all but the first byte of the patched range.
          */
         for (do_sync = 0, i = 0; i < nr_entries; i++) {
-                u8 old[POKE_MAX_OPCODE_SIZE+1] = { tp[i].old, };
+                u8 old[POKE_MAX_OPCODE_SIZE+1];
                 u8 _new[POKE_MAX_OPCODE_SIZE+1];
                 const u8 *new = tp[i].text;
                 int len = tp[i].len;

+                tp[i].first = tp[i].text[0];
+                tp[i].text[0] = (u8) INT3_INSN_OPCODE;
+
                 if (len - INT3_INSN_SIZE > 0) {
-                        memcpy(old + INT3_INSN_SIZE,
-                               text_poke_addr(&tp[i]) + INT3_INSN_SIZE,
-                               len - INT3_INSN_SIZE);
+                        memcpy(old, text_poke_addr(&tp[i]), len);

                         if (len == 6) {
                                 _new[0] = 0x0f;
@@ -2343,9 +2333,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
                                 new = _new;
                         }

-                        text_poke(text_poke_addr(&tp[i]) + INT3_INSN_SIZE,
-                                  new + INT3_INSN_SIZE,
-                                  len - INT3_INSN_SIZE);
+                        text_poke(text_poke_addr(&tp[i]), new, len);

                         do_sync++;
                 }
@@ -2391,7 +2379,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
          * replacing opcode.
          */
         for (do_sync = 0, i = 0; i < nr_entries; i++) {
-                u8 byte = tp[i].text[0];
+                u8 byte = tp[i].first;

                 if (tp[i].len == 6)
                         byte = 0x0f;
--
2.47.0