Message-Id: <20230711213311.03b02a936aefbcf5f06b6c3b@kernel.org>
Date:   Tue, 11 Jul 2023 21:33:11 +0900
From:   Masami Hiramatsu (Google) <mhiramat@...nel.org>
To:     Petr Pavlu <petr.pavlu@...e.com>
Cc:     tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com, hpa@...or.com, peterz@...radead.org,
        samitolvanen@...gle.com, x86@...nel.org,
        linux-trace-kernel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] x86/retpoline,kprobes: Skip optprobe check for
 indirect jumps with retpolines and IBT

On Tue, 11 Jul 2023 11:19:52 +0200
Petr Pavlu <petr.pavlu@...e.com> wrote:

> The kprobes optimization check can_optimize() calls
> insn_is_indirect_jump() to detect indirect jump instructions in
> a target function. If any is found, creating an optprobe is disallowed
> in the function because the jump could be from a jump table and could
> potentially land in the middle of the target optprobe.
> 
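For readers following along: the hazard here is easiest to see with
a small sketch. This example is illustrative only (the function and the
generated assembly are assumptions, not taken from the patch); it shows
the kind of C code a compiler may lower into a jump table.

	/*
	 * A dense switch like this may be compiled into a jump table,
	 * dispatched on x86-64 with something like:
	 *
	 *	jmp	*table(, %rax, 8)
	 *
	 * Each table entry points into the middle of func(). An
	 * optprobe replaces up to several instructions at the probe
	 * address with a 5-byte JMP, so an indirect jump through the
	 * table could land inside that rewritten region and execute
	 * a bogus instruction stream.
	 */
	int func(int x)
	{
		switch (x) {
		case 0: return 10;
		case 1: return 11;
		case 2: return 12;
		case 3: return 13;
		case 4: return 14;
		case 5: return 15;
		default: return -1;
		}
	}
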
> With retpolines, insn_is_indirect_jump() additionally looks for calls to
> indirect thunks which the compiler may have used to replace original
> jumps. This extra check is, however, unnecessary because jump tables are
> disabled when the kernel is built with retpolines. The same is currently
> the case with IBT.
> 
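As background on why the thunk check is now redundant (a sketch; the
exact compiler flags vary by toolchain and version):

	/*
	 * With CONFIG_RETPOLINE, flags such as
	 * -mindirect-branch=thunk-extern -mindirect-branch-register
	 * make the compiler replace an indirect jump
	 *
	 *	jmp	*%rax
	 *
	 * with a direct jump to a thunk
	 *
	 *	jmp	__x86_indirect_thunk_rax
	 *
	 * Such builds also disable jump tables (gcc via an explicit
	 * -fno-jump-tables, clang by default under retpolines), so
	 * ordinary functions no longer contain the indirect jumps the
	 * extra check was looking for.
	 */
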
> Based on this observation, remove the logic to look for calls to
> indirect thunks and skip the check for indirect jumps altogether if the
> kernel is built with retpolines or IBT. Subsequently, remove the
> symbols __indirect_thunk_start and __indirect_thunk_end, which are no
> longer needed.
> 
> Dropping this logic indirectly fixes a problem where the range
> [__indirect_thunk_start, __indirect_thunk_end] wrongly also included
> the return thunk. As a result, machines that used the return thunk as
> a mitigation and didn't have it patched by any alternative ended up
> unable to use optprobes in any regular function.
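
For context, the failure mode can be reconstructed from the check being
removed in the diff below. The assembly line is what
-mfunction-return=thunk-extern emits in place of every return:

	/*
	 * Every function compiled with -mfunction-return=thunk-extern
	 * ends in
	 *
	 *	jmp	__x86_return_thunk
	 *
	 * and the return thunk is emitted into .text..__x86.*, i.e.
	 * inside [__indirect_thunk_start, __indirect_thunk_end]. The
	 * range check below therefore matched the final JMP of every
	 * function, making can_optimize() fail everywhere:
	 */
	if (!ret)
		ret = insn_jump_into_range(insn,
				(unsigned long)__indirect_thunk_start,
				(unsigned long)__indirect_thunk_end -
				(unsigned long)__indirect_thunk_start);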

This looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Thanks!

> 
> Fixes: 0b53c374b9ef ("x86/retpoline: Use -mfunction-return")
> Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Suggested-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
> Signed-off-by: Petr Pavlu <petr.pavlu@...e.com>
> ---
>  arch/x86/include/asm/nospec-branch.h |  3 ---
>  arch/x86/kernel/kprobes/opt.c        | 40 +++++++++++-----------------
>  arch/x86/kernel/vmlinux.lds.S        |  2 --
>  tools/perf/util/thread-stack.c       |  4 +--
>  4 files changed, 17 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 55388c9f7601..c5460be93fa7 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -461,9 +461,6 @@ enum ssb_mitigation {
>  	SPEC_STORE_BYPASS_SECCOMP,
>  };
>  
> -extern char __indirect_thunk_start[];
> -extern char __indirect_thunk_end[];
> -
>  static __always_inline
>  void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
>  {
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index 57b0037d0a99..517821b48391 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -226,7 +226,7 @@ static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real)
>  }
>  
>  /* Check whether insn is indirect jump */
> -static int __insn_is_indirect_jump(struct insn *insn)
> +static int insn_is_indirect_jump(struct insn *insn)
>  {
>  	return ((insn->opcode.bytes[0] == 0xff &&
>  		(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
> @@ -260,26 +260,6 @@ static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
>  	return (start <= target && target <= start + len);
>  }
>  
> -static int insn_is_indirect_jump(struct insn *insn)
> -{
> -	int ret = __insn_is_indirect_jump(insn);
> -
> -#ifdef CONFIG_RETPOLINE
> -	/*
> -	 * Jump to x86_indirect_thunk_* is treated as an indirect jump.
> -	 * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
> -	 * older gcc may use indirect jump. So we add this check instead of
> -	 * replace indirect-jump check.
> -	 */
> -	if (!ret)
> -		ret = insn_jump_into_range(insn,
> -				(unsigned long)__indirect_thunk_start,
> -				(unsigned long)__indirect_thunk_end -
> -				(unsigned long)__indirect_thunk_start);
> -#endif
> -	return ret;
> -}
> -
>  /* Decode whole function to ensure any instructions don't jump into target */
>  static int can_optimize(unsigned long paddr)
>  {
> @@ -334,9 +314,21 @@ static int can_optimize(unsigned long paddr)
>  		/* Recover address */
>  		insn.kaddr = (void *)addr;
>  		insn.next_byte = (void *)(addr + insn.length);
> -		/* Check any instructions don't jump into target */
> -		if (insn_is_indirect_jump(&insn) ||
> -		    insn_jump_into_range(&insn, paddr + INT3_INSN_SIZE,
> +		/*
> +		 * Check that no instruction jumps into the target, either
> +		 * indirectly or directly.
> +		 *
> +		 * The indirect case is present to handle code with jump
> +		 * tables. When the kernel uses retpolines, the check should in
> +		 * theory additionally look for jumps to indirect thunks.
> +		 * However, a kernel built with retpolines or IBT has jump
> +		 * tables disabled, so the check can be skipped altogether.
> +		 */
> +		if (!IS_ENABLED(CONFIG_RETPOLINE) &&
> +		    !IS_ENABLED(CONFIG_X86_KERNEL_IBT) &&
> +		    insn_is_indirect_jump(&insn))
> +			return 0;
> +		if (insn_jump_into_range(&insn, paddr + INT3_INSN_SIZE,
>  					 DISP32_SIZE))
>  			return 0;
>  		addr += insn.length;
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index a4cd04c458df..dd5b0a68cf84 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -133,9 +133,7 @@ SECTIONS
>  		KPROBES_TEXT
>  		SOFTIRQENTRY_TEXT
>  #ifdef CONFIG_RETPOLINE
> -		__indirect_thunk_start = .;
>  		*(.text..__x86.*)
> -		__indirect_thunk_end = .;
>  #endif
>  		STATIC_CALL_TEXT
>  
> diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
> index 374d142e7390..c6a0a27b12c2 100644
> --- a/tools/perf/util/thread-stack.c
> +++ b/tools/perf/util/thread-stack.c
> @@ -1038,9 +1038,7 @@ static int thread_stack__trace_end(struct thread_stack *ts,
>  
>  static bool is_x86_retpoline(const char *name)
>  {
> -	const char *p = strstr(name, "__x86_indirect_thunk_");
> -
> -	return p == name || !strcmp(name, "__indirect_thunk_start");
> +	return strstr(name, "__x86_indirect_thunk_") == name;
>  }
>  
>  /*
> -- 
> 2.35.3
> 
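
A side note for readers less familiar with the x86 opcode map, on what
insn_is_indirect_jump() above actually matches (background only, not
part of the patch):

	/*
	 * Opcode 0xff selects a group; the ModRM reg field picks the
	 * operation:
	 *
	 *	0xff /4		JMP r/m (near indirect jump)
	 *	0xff /5		JMP m16:32/m16:64 (far indirect jump)
	 *
	 * (X86_MODRM_REG(modrm) & 6) == 4 is true exactly for reg
	 * values 4 and 5, so both forms are caught with one compare.
	 */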


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
