Message-Id: <20170327074348.567d4043935269b817ae6bf0@kernel.org>
Date: Mon, 27 Mar 2017 07:43:48 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
"H . Peter Anvin" <hpa@...or.com>,
Ananth N Mavinakayanahalli <ananth@...ux.vnet.ibm.com>,
Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
"David S . Miller" <davem@...emloft.net>,
Andrey Ryabinin <aryabinin@...tuozzo.com>
Subject: Re: [RFC PATCH tip/master 8/8] kprobes/x86: Consolidate insn
decoder users for copying code
On Sun, 26 Mar 2017 12:33:01 +0900
Masami Hiramatsu <mhiramat@...nel.org> wrote:
> Consolidate the x86 instruction decoder users on the path of
> copying original code for kprobes.
>
> Kprobes decodes the same instruction up to 3 times when
> preparing its instruction buffer: the 1st time to get the
> length of the instruction, the 2nd to adjust the displacement,
> and the 3rd to check whether the instruction is boostable.
> Each time, the decode target address is slightly different
> (the 1st is the original address or the recovered instruction
> buffer, the 2nd and 3rd are the copied buffer), but all of
> them must contain the same instruction.
> Thus, this patch changes the decode target to the copied
> buffer up front and reuses the decoded "insn" for the
> displacement adjustment and the boostable check.
>
> Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> ---
> arch/x86/kernel/kprobes/common.h | 4 +-
> arch/x86/kernel/kprobes/core.c | 62 +++++++++++++++++---------------------
> arch/x86/kernel/kprobes/opt.c | 5 ++-
> 3 files changed, 33 insertions(+), 38 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
> index d688826..db2182d 100644
> --- a/arch/x86/kernel/kprobes/common.h
> +++ b/arch/x86/kernel/kprobes/common.h
> @@ -67,7 +67,7 @@
> #endif
>
> /* Ensure if the instruction can be boostable */
> -extern int can_boost(kprobe_opcode_t *instruction, void *addr);
> +extern int can_boost(struct insn *insn, void *orig_addr);
> /* Recover instruction if given address is probed */
> extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
> unsigned long addr);
> @@ -75,7 +75,7 @@ extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
> * Copy an instruction and adjust the displacement if the instruction
> * uses the %rip-relative addressing mode.
> */
> -extern int __copy_instruction(u8 *dest, u8 *src);
> +extern int __copy_instruction(u8 *dest, u8 *src, struct insn *insn);
>
> /* Generate a relative-jump/call instruction */
> extern void synthesize_reljump(void *from, void *to);
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index a9ae61a..de7475b 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -164,33 +164,29 @@ static kprobe_opcode_t *skip_prefixes(kprobe_opcode_t *insn)
> NOKPROBE_SYMBOL(skip_prefixes);
>
> /*
> - * Returns non-zero if opcode is boostable.
> + * Returns non-zero if INSN is boostable.
> * RIP relative instructions are adjusted at copying time in 64 bits mode
> */
> -int can_boost(kprobe_opcode_t *opcodes, void *addr)
> +int can_boost(struct insn *insn, void *addr)
> {
> - struct insn insn;
> kprobe_opcode_t opcode;
>
> if (search_exception_tables((unsigned long)addr))
> return 0; /* Page fault may occur on this address. */
>
> - kernel_insn_init(&insn, (void *)opcodes, MAX_INSN_SIZE);
> - insn_get_opcode(&insn);
> -
> /* 2nd-byte opcode */
> - if (insn.opcode.nbytes == 2)
> - return test_bit(insn.opcode.bytes[1],
> + if (insn->opcode.nbytes == 2)
> + return test_bit(insn->opcode.bytes[1],
> (unsigned long *)twobyte_is_boostable);
>
> - if (insn.opcode.nbytes != 1)
> + if (insn.opcode->nbytes != 1)
Oops, this should be insn->opcode.nbytes, not insn.opcode->nbytes.
--
Masami Hiramatsu <mhiramat@...nel.org>