Message-Id: <20240204210932.bd112a37dd3c276b046f6b16@kernel.org>
Date: Sun, 4 Feb 2024 21:09:32 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Jinghao Jia <jinghao7@...inois.edu>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
 Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
 x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>, Peter Zijlstra
 <peterz@...radead.org>, Xin Li <xin@...or.com>,
 linux-trace-kernel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] x86/kprobes: Boost more instructions from
 grp2/3/4/5

On Sat,  3 Feb 2024 21:13:00 -0600
Jinghao Jia <jinghao7@...inois.edu> wrote:

> With the instruction decoder, we are now able to decode and recognize
> instructions with opcode extensions. There are more instructions in
> these groups that can be boosted:
> 
> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
> Group 4: INC, DEC (byte operation)
> Group 5: INC, DEC (word/doubleword/quadword operation)
> 
> These instructions were not boosted previously because the groups contain
> reserved opcodes, e.g., group 2 with ModR/M.nnn == 110 is unmapped. As a
> result, kprobes attached to them require two int3 traps, since being
> non-boostable also prevents jump optimization.
> 
> Some simple tests on QEMU show that after boosting and jump-optimization
> a single kprobe on these instructions with an empty pre-handler runs 10x
> faster (~1000 cycles vs. ~100 cycles).
> 
> Since these instructions are mostly ALU operations and do not touch
> special registers like RIP, let's boost them so that we get the
> performance benefit.
> 

This looks good to me. Could you also check roughly how many instructions in
a typical vmlinux are covered by this change?

Thank you,
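
For reference, one rough way to estimate that coverage is to walk the kernel
text with the instruction decoder and count how many decoded instructions pass
can_boost(). The sketch below is hypothetical and not part of the patch: it
assumes insn_decode_kernel() from <asm/insn.h>, the _stext/_etext bounds from
<asm/sections.h>, and visibility of can_boost() (e.g. being built next to
arch/x86/kernel/kprobes/core.c); linear decoding also trips over data and
padding embedded in .text, so the number would only be approximate.

#include <linux/printk.h>
#include <asm/insn.h>
#include <asm/sections.h>

/*
 * Hypothetical debug sketch: linearly decode kernel text and count how many
 * instructions can_boost() would accept.  Rough estimate only.
 */
static unsigned long count_boostable(void)
{
	unsigned char *addr = (unsigned char *)_stext;
	unsigned long nr_insn = 0, nr_boost = 0;
	struct insn insn;

	while (addr < (unsigned char *)_etext) {
		/* Decode one instruction; stop at the first failure. */
		if (insn_decode_kernel(&insn, addr) < 0)
			break;
		nr_insn++;
		if (can_boost(&insn, addr))
			nr_boost++;
		addr += insn.length;
	}

	pr_info("kprobes: %lu of %lu decoded insns look boostable\n",
		nr_boost, nr_insn);
	return nr_boost;
}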

> Signed-off-by: Jinghao Jia <jinghao7@...inois.edu>
> ---
>  arch/x86/kernel/kprobes/core.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 7a08d6a486c8..530f6d4b34f4 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -169,22 +169,33 @@ bool can_boost(struct insn *insn, void *addr)
>  	case 0x62:		/* bound */
>  	case 0x70 ... 0x7f:	/* Conditional jumps */
>  	case 0x9a:		/* Call far */
> -	case 0xc0 ... 0xc1:	/* Grp2 */
>  	case 0xcc ... 0xce:	/* software exceptions */
> -	case 0xd0 ... 0xd3:	/* Grp2 */
>  	case 0xd6:		/* (UD) */
>  	case 0xd8 ... 0xdf:	/* ESC */
>  	case 0xe0 ... 0xe3:	/* LOOP*, JCXZ */
>  	case 0xe8 ... 0xe9:	/* near Call, JMP */
>  	case 0xeb:		/* Short JMP */
>  	case 0xf0 ... 0xf4:	/* LOCK/REP, HLT */
> -	case 0xf6 ... 0xf7:	/* Grp3 */
> -	case 0xfe:		/* Grp4 */
>  		/* ... are not boostable */
>  		return false;
> +	case 0xc0 ... 0xc1:	/* Grp2 */
> +	case 0xd0 ... 0xd3:	/* Grp2 */
> +		/*
> +		 * AMD uses nnn == 110 as SHL/SAL, but Intel makes it reserved.
> +		 */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b110;
> +	case 0xf6 ... 0xf7:	/* Grp3 */
> +		/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) != 0b001;
> +	case 0xfe:		/* Grp4 */
> +		/* Only INC and DEC are boostable */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001;
>  	case 0xff:		/* Grp5 */
> -		/* Only indirect jmp is boostable */
> -		return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> +		/* Only INC, DEC, and indirect JMP are boostable */
> +		return X86_MODRM_REG(insn->modrm.bytes[0]) == 0b000 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b001 ||
> +		       X86_MODRM_REG(insn->modrm.bytes[0]) == 0b100;
>  	default:
>  		return true;
>  	}
> -- 
> 2.43.0
> 
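
As an aside for readers less familiar with opcode extensions: in Grp2/3/4/5
the opcode byte only selects the group, and the reg field of the ModR/M byte
(bits 5:3, the "nnn" above) selects the actual operation, which is why the new
checks look at X86_MODRM_REG(). Below is a minimal stand-alone illustration
for Grp3; the macro definition matches arch/x86/include/asm/insn.h, while the
opcode table and example encodings come from the x86 opcode maps, not from the
patch itself.

#include <stdbool.h>

/* Same definition as in arch/x86/include/asm/insn.h: reg/nnn = bits 5:3. */
#define X86_MODRM_REG(modrm)	(((modrm) & 0x38) >> 3)

/*
 * Grp3, opcode 0xF7, is disambiguated purely by ModR/M.reg:
 *
 *   /0 TEST    /2 NOT   /4 MUL    /6 DIV
 *   /1 (rsvd)  /3 NEG   /5 IMUL   /7 IDIV
 *
 * e.g. "f7 d8" (ModR/M 0xd8, reg == 3) encodes NEG EAX, while
 * "f7 c0 <imm32>" (reg == 0) encodes TEST EAX, imm32.  The /1 slot is the
 * reserved encoding the new Grp3 check rejects (AMD documents it as a TEST
 * alias, Intel leaves it reserved).
 */
static bool grp3_is_boostable(unsigned char modrm)
{
	return X86_MODRM_REG(modrm) != 1;
}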


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
