Message-ID: <20180129171319.GG2228@hirez.programming.kicks-ass.net>
Date:   Mon, 29 Jan 2018 18:13:19 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Borislav Petkov <bp@...en8.de>
Cc:     David Woodhouse <dwmw2@...radead.org>, X86 ML <x86@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        tim.c.chen@...ux.intel.com, pjt@...gle.com, jikos@...nel.org,
        gregkh@...ux-foundation.org, dave.hansen@...el.com,
        riel@...hat.com, luto@...capital.net,
        torvalds@...ux-foundation.org, ak@...ux.intel.com,
        keescook@...gle.com
Subject: Re: [PATCH v2 1/2] x86/retpoline: Simplify vmexit_fill_RSB()

On Fri, Jan 26, 2018 at 09:07:25PM +0100, Borislav Petkov wrote:
> +.macro FILL_RETURN_BUFFER nr:req ftr:req
>  #ifdef CONFIG_RETPOLINE
> +	ALTERNATIVE "", "call __clear_rsb", \ftr
>  #endif
>  .endm
>  
> @@ -206,15 +174,10 @@ extern char __indirect_thunk_end[];
>  static inline void vmexit_fill_RSB(void)
>  {
>  #ifdef CONFIG_RETPOLINE
> +	alternative_input("",
> +			  "call __fill_rsb",
> +			  X86_FEATURE_RETPOLINE,
> +			  ASM_NO_INPUT_CLOBBER("memory"));
>  #endif
>  }
>  
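
FWIW, for anyone not fluent in the alternatives machinery: with an empty
oldinstr this ends up as a boot-time patched call site, so there is no
runtime conditional. Roughly (my sketch; IIRC the empty oldinstr gets
NOP-padded out to the length of the replacement):

	# X86_FEATURE_RETPOLINE clear: 5 bytes of NOPs, nothing runs
	nop; nop; nop; nop; nop
	# X86_FEATURE_RETPOLINE set: patched at boot into
	call	__fill_rsb		# e8 + rel32, 5 bytes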


> @@ -19,6 +20,37 @@ ENDPROC(__x86_indirect_thunk_\reg)
>  .endm
>  
>  /*
> + * Google experimented with loop-unrolling and this turned out to be
> + * the optimal version — two calls, each with their own speculation
> + * trap should their return address end up getting used, in a loop.
> + */
> +.macro BOINK_RSB nr:req sp:req
> +	push %_ASM_AX
> +	mov	$(\nr / 2), %_ASM_AX
> +	.align 16
> +771:
> +	call	772f
> +773:						/* speculation trap */
> +	pause
> +	lfence
> +	jmp	773b
> +	.align 16
> +772:
> +	call	774f
> +775:						/* speculation trap */
> +	pause
> +	lfence
> +	jmp	775b
> +	.align 16
> +774:
> +	dec	%_ASM_AX
> +	jnz	771b
> +	add	$((BITS_PER_LONG/8) * \nr), \sp
> +	pop %_ASM_AX
> +.endm
> +
> +
> +/*
>   * Despite being an assembler file we can't just use .irp here
>   * because __KSYM_DEPS__ only uses the C preprocessor and would
>   * only see one instance of "__x86_indirect_thunk_\reg" rather
> @@ -46,3 +78,15 @@ GENERATE_THUNK(r13)
>  GENERATE_THUNK(r14)
>  GENERATE_THUNK(r15)
>  #endif
> +
> +ENTRY(__fill_rsb)
> +	BOINK_RSB RSB_FILL_LOOPS, %_ASM_SP
> +	ret
> +END(__fill_rsb)
> +EXPORT_SYMBOL_GPL(__fill_rsb)
> +
> +ENTRY(__clear_rsb)
> +	BOINK_RSB RSB_CLEAR_LOOPS, %_ASM_SP
> +	ret
> +END(__clear_rsb)
> +EXPORT_SYMBOL_GPL(__clear_rsb)
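
Side note for anyone staring at the macro: each CALL does double duty,
it pushes a return address on the architectural stack *and* allocates
an RSB entry, and only the RSB side is meant to survive. A per-call
view (the macro batches the stack fixup into a single ADD at the end):

	call	772f		# pushes the address of 773 on both the
				# stack and the RSB
773:	pause			# speculation trap; only ever reached
	lfence			# speculatively, via a stale RSB entry
	jmp	773b
772:				# architectural execution resumes here
	add	$8, %rsp	# discard the stack copy; the RSB entry
				# pointing at the trap stays behind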


One thing I feel this ought to mention (in the Changelog probably) is
that it loses one RET for SKL+. That is, where we used to have 16
'safe' RETs before this, we now have 15.
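
Spelling out the accounting (my numbers, assuming RSB_FILL_LOOPS == 16
and a 16-entry RSB as on SKL):

	call	__fill_rsb	# RSB: push return-into-caller
				# BOINK_RSB then stuffs 16 trap entries
				# on top of it
	ret			# RSB: pop, consuming the topmost trap
				# entry -> 15 trap entries left

whereas the old inline sequence left all 16 trap entries in place.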
